[binary artifact: tar archive of var/home/core/zuul-output/ containing logs/kubelet.log.gz (gzip-compressed kubelet log, owner core:core); compressed contents are not recoverable as text]
W;i)Aɾ6^퉎G/HSv'Q,Q*FK+sBJ5GgiN/<Z>rr6dY1Ffwm̹~YS.viIgə /!$; Nn6FӍY,o!4> JbɇYA&Yl!7uӳ2"͢SYi@ GWvKh}ӛSNOyrW J`t GL`7{@w4ڭo-i[n͍֪ͧUGpG-5:yZ7 w~j>z2<)&[Ed#IuX6ifC3?Sazڪ]%y] q7!i\[s11iUMi}@I%)KV /{ZTD*9'Ƚ% KR!A {a|Qp<-@U6g $ŚA:FkSW$P|)Yےjt~Z}}ӷ RϮ Q*Z9si)YQJYJciE]n߻-Ov7m۲Ɏ8mɣM%/RReOr2N+0!#Rl5iu䩍[yj͜G'O3Վo^[ɌO;\'v6o-tTIDSʀ2 .6H t0$Jd#)EM>!| 6{7ީz$tOSuWSn2DgR8tcbW"5'oKێofsŤԞqP.iT@D-VU^ G|g(H_v"^XjՈNJjcG5)[nmPd}BP!rZAhCC2G۪0jcQk,z}RHn i#m}j 5T\9Rڑ(F['f`w(}Hh4=D```s9?D`Z)=XRL 1-HmEj.A1O.ەϣ5>}-Dmlq`3I2N"3 Qۃ$eѣ%eMR۞ʖZ efe)ɑR(*8/XJr? f˩Հ+[ h[3gGa](5Cѽ>MVpn|кZy5,?fd>GUV ڤܒVA0lhs$1K&.m9*/ xg u%.IVg>QM`&'GfL4kf-xh8MǷY6ؖžh ̖Z,kW\5kr\BR+v(*iYBH!$ۻQ"2(Ϡfhb3L;BTH`. #dP$*@mvFjL0Tu 6P|yetGᄌ>)$Es5snSOiBZz+@c_($yǖYel2mY,bP>^l3TiR}oJ29pVU\.Ш`ymEpQ g$o$W;hg?:0zEDJjs_}}iCR숕AJp*Tr\s@IJp]ɞ+)޵q$eO Q]M@8wFKi%vI::{ggW$}h$WUUw=2,\RS ,^+1:k-xFUBucDD FB%%uky/o0Gыw`6Mc41Z&)~hl -k.|]=;" Dy@GX2/Q2jJ \V!b[ʒȨs9I2†"ejj^h( HRRwE$@^7lc9{-/ Kyhㆎg]8=y,>;7Ug[PP=&\g*m8J 쎩-Գ~N^K:ۛȭ-a_;jm1o,ۜ(ͣ;?-GolchO'nm|ck#C<<ϼ~L>&.c!pVɡ¿[Ys6qP"v7<-G1Ǘlj_F|s#攖.#M{Lf|iX\ATRh/vW8OHQ9fVK Z&R@Kc g5 yU{fKP|̦FO)S_ ShDLM )95u,NGGGfӶam߀6E=J-V2Vy(XJ'SK"QhS) D2( j)!4JSR1X\*>D9a)l\d$* R9 qơXcAie7_sz~r:w/r@Gg=>2bY40A1EBGimŒEoYf`K¶a 2Tg:UMm],$- Emub+rv#vk.池v38uc#jV;3휏Q&TAJ@UARP$RXUS:v 2^t1֤,UY1il82F $"K`vԺXX3rvaԯT`<;aDqJ] ]R6f=V1d+=B_4("t*(Uޱ4*>s1@ ϱ,WJ_Aa|LOˮqsߞjXOb{ٗΡ_>¹rGHi c*MH#>=ɴwhra`]`ð !au1aY$2L 1{v#K~'t- ܠ_}|aU@ë/{k]C{V%q!\9ŷΗ0d^}?`Mjb ;{9pIr N>=^M&d⠋"[ʝr6h&bXG)=DFNywtl%W&~Xѻ.yVzJ ˺PJ-Mn,$Ԧ)J.OZ,S . ]QO_?<_,N~}5\y)OUT?'XhlrINޣmexY)K{VPұ/?%4MёO ?ouf#"٫Ww5\lY{oэ+|?|~z5~0IV hJ{F'1i?f*zd{u~j)>wn\s+-v8Cwp[irA1N_gBIIY nGQPrF[T &Td QCm6G9__P!wUސ3,X IBQi yrp䰖"s э*[n_z%޶vc1NӷB~Ҙ1d:czؿ.oºy<gT@υ]7YߗUrg?he8Kf!f,ou/~">ߋZ+RJbZ~,=eVhJ8ȲTW7Y'Ч>;Y& ?{>>/TC(2Hacʓr|{֖@=,au~OK_d9p4V/juw\Q.g%LrrޟjNj Z#&̍qy>́}D-ݼ?Nϔ%޷R чjPHfivmԁK?GW@Q'ѨSw& e>򙳡et Q)&!yT +>K$ًҩ,\pѢs2dX%XZg2ϴ)T/E3غm3rdz4Ud.7/)pW =^WyiXkMTW[[UG({)MJ}vJRr|Jy$`UIVj@šLrL@3 E=AchuL,jdB֗_Ē$mBF!kBvJ+^ΒsP U@yymsvhmZ׍9{Y/W lu䴅Qk#^YF$} B!$DbеC},#pP ֎)ɗQZ1@N?1 1qY+7>GTKRXoU3jr$hj3tF?i-G6BF{ Z\XWZ5KY4hM"ZA7w^&n x]+ZT[WSVx%EtkxDǫ͋*Y[@nGi箆>&XPSoUa^i`N%XU<"KU`kx-(cmUfrtҡMEUN*:xqxސQr16ibq҈&l7 IQA%*tR9EI*LBRԑ I'%$@Z0}]J'`J67ԓ5Ȓ#Oⱉ:2z#*$NOdR&'jc}&eѼ#&fa\9Gm, j#1߆wm/I:Rl[;ld t4t/nƬ/ŨZu^w*֛n*ϪUhv}&>{:eBӭ*T=.^m3|}&W0Eq(ׄ3bb 3.KM3[y`#sTjy< &f5:{3|98S(Z؀\7ʍ`%jƿ'Wqٞ#t_ͬf{]GEq:M0niLf+z ؜Űws=}S̼5ni՝8;6bk'5@jr*iPvCx@se<{{sy=۩<{nsy=Ͻ{7p'd<{{sy=ϽQI,y{{sy=Ͻ>_UCX0hЪ: v>5F(38dЪw&|r-ʵ\'< J@pp56(7cL 1'rHY"!X(:Jr2{9u'dn>>M-)놝j,`I0DţFVͨJ8+4&XJ8h) se5EAe*GZ@>d8~4 ф(f\|G N&~)qHKvݢ[9-4INRLOJN}$41O?Lƃ.(zkWQ'ͭt!p'dwMy9Tx4g&'if8:z$bD3)r!91TzR(6ΤJ 2=9w1!qz7Hmn|kt *J A42 |bXXL3BV YkrwW]>lSgf=`jl#6.:Gs>59IpS*bYΩLQ'ݐ{r*v8˰ɕJ8%fM#&&lV%FbGl7%橠v18ya=jN?g\k= h!)-LL&;$Ɖ KJc@ Px\$Se=XRwFRB U1C$^P4t p!@„iu'N4I f @"2Hj P/ T)8uUz` Gf{K,2\Є3I\;iQ`Z$|BD hw"g9+qFA%Ec!L M! 
lTQ)WN1$ 0~?VlcLM;NREL>\ ۠^To_f>]e6:2!&X .eo(B9^ `$m2mY\l4Q =FF ?,j""4uUmu4yj'Ehb'L9ޟS/fJl>L9 'ݕ!~zb[,S`6Z!4>2X[ℭk,09;"}DQl\;~l|7Ogkz8Hø:?n飧> q41{jE l_Mܿ9\|$*PhҢ*džVAv`Urj+"/`OTr%#p9v|{z4߾^\;{8]0\^.=kęz<:rQh6LqN# Nz'?P~DѾ)x:]~~㸋j}6V;] Vޞs=NB7eYAj} So]?Bds'i ˜Q⿧tB[$iw$].{< gObGݻ|d%&UwD풴N5v,6st}wۂ,R{luSR{ՈueX&fmy-,%>xjI߾޺Svec(jL?ۅcY y$yeL 9FM"4gPsH|}̡Bg(vT^ 5x-ьM!I7 @t:P.Z)cdkuzE@/kOoY/&'sn|x'U|u:}kPߘ\7t[ߖ fMuӐG 6B.rpʟ-8{f?-v[O?GC;TH7RO  xD6(b^dTQ"g\^}cm5h|ol[sK<]_Q5ryaBl W^v rl~GYBҳ3g՘9][f,\ [Tjc_b9PwL޵#lۼoM; p60xu"KK^dz8u-YZ3$V"Y,~U$ +-maO1`ĐAA9}7R1 hKM.!Z^:l))a -ZJ^:mtf5!˂|Ih29?J@AdeiQ%[hlmM YsץA?1S5ӗv|0`J@詫h/&@4.0$Zǽ ,KA(Gpta5VȂ{LU9s(vy߶͞1 izgP.K) x5A ȒiKĕ:V"tIO֫ī]y5kvVhn1۠!H2.滴>pjӊK< Nm.e 빱1$SIIψb)ÔTӚF<0zh !yla6|Ĩy̓GrA?O/-Z湋 W'8!>#"!*x.2Orho|Fl=/i~]x{QǫuB+#2߭ m&ަLD-Q&IO(uH*DkZusPѺAahjb(Cd(ʤ P$@ӗ=g5.䩞eja e~gߗdX5TwhQ[%P!@Fp-2\K$X=@38;Vy =V {{#&*o8s,9T&V8 Qz"Z+RPBL Zߋ{]-M1YM,Nr:y R,T$c41PY 7VN':Wi6q3ټ -֣!M67N7髴L/.;ݍEMϿzM".=}bE u)u&EEg;xђJXN_ʓapm}Y &u\OS$L?%'xLW4NFg rNɅiR?ArIrl$CqORNE+>-.L !^E{UV:_6oE/Uo񾸶!O*Ktvv:> 'cRawpWz iR&MZq4GZ̵ivQÌcg al"̕n[n};(BUz;CIvc-&A֔oa=Uբjj]Xͬ|9v_٨~W9^~уipY\X+#k7rQʈI2ȑ8#bET)=E?cU.+TN| Dž\_:Ocw>~:G_ߟ:?a ;o'D`ύ_WZЦUs T-lWW&GLʡZM ο~ :PݏY5ݟ4]5+Rgs Ei~U ?*o0# ш!  G=z?$zy>"y@FkAPYugSq4ԷҮ#cwL&æfpW{.2Y`bqBK!G!nMx/uSАxPV;ldc:PJsW1=3L9ävD"MHWSz,"Lz,L 6]+J cytjzꕗwwC؄W$Rz2h{d+L׼ck&nLz@i3iCi37;fTS )- JIisT{P G O)6b).` VSBٛ2L,h5T"}N imLC97?ȭet4No: UcK|mS=4Wޕ^Ewyw7`F$#I!PzBui K] $$\4J_l42Zvhh6hښ"uI{ST)ƄJ4M=ߘ8'q0oaZg1yvk {Ă-+@~aheezoK{Oo.$ɳF %HJK-)+'Sh]-]Hvp.$?  )DV:5)O#Kf(>LH<4^'%HlO,3j@G oS|9E X yj6&td ry`rpǙ<"yq }- lIXkˮ.-mƲ0v,\ ū{«3* 2%4Thx%K9v@[Am %'%V$cLZB@w3FP!" "]Q(LU1HL4 PDB0*Qrbg F8 Yq͇U麼ʈ( ]:9 $#]5VA`S(iy;ZD5MC_οK'7}gZ'}{Muf<7ߗo.z.)OĬSr?/U- \ z88*4[6s d4G$u*,¡OqJC;@[ LFIiC"t[o˭V(V }Y#v^(&lL2Gx#troTP9CKÞ:9|(2LqlفHY: |a9rmY1wr$; wl!seOa׽٢FJĴc/z8'?)98\a=D6$L9w%X#Kcv?6T뤯vYOžؓ4:.XCn]ZFpA{^Zx 1AxD!2{1^D8ǞHz?qxx8/c1B բbE;wW 񽼤bExRN ~EC_͗޷^1cr{h߻ݛfOkqLܙ}pm񞠱$GZ7gvn .4dL^8F{\R2B[HkJi4HT46A]H 5wV9ocY5.e 9Ł$UR$2AQ9SRXב.ZrB k9޸<;lO4=}#R-,-#T't™21p>3~ ;R;BH <ÛnH{f6/";Y 2ӫ^kfE4JM{W޹h,)[ݮ*'l6PS9 k:ǔ'tXBYmDž6]d)gAmC zDRѽ )g: ε.~l`v |->taneDyL*XV,-C_]^*BGcҚQJEtk^*Ge\eq8s5JIgc՘+A19(PhU+C7WYJ֢h$O:$lH݇o7SEoq18MJ0|'Vi-BǬ6:fM* )WhX *+iQ*' Rm}ѥVRGHҊR*.q8o(ev˯;%V=]yYgL,KR@TtKr:obtc}`#LCYS|7:k8 !@c$hF"B 5Ha9<nl%( 2\W%)Ĩ5cw:A 100Ic_x0繀exOW$?j~4)zLlhN@ڼ9!ӌ%-!8 *DDf lkF s@oc*HwxpV[8QȄQ4(C Ҙ8+f6x AiW>S $Qot "'8)RF=& XLAh(~0IjA :YÒFi=q1ʹJ&\M"?{q>%8ԏ!)lE1ȈeveU.e\cxWUUܠwj{UmY!XW TS4$Lk[r ^YԌa68fB`i2X~O!kuK'tmuVIfa1hHj)1+gkjWX@˛7Ze=4JRJRT mj}z J!U*gk ϙ˔L,M*:)d:8 (Yd,cDU+lnW6x-5#W6ӫm(ĸ'%ة'YxkE%ۭX艬##Jwmhlv6XI_U-/U瓾 ^4Ea*DTFk4?xEdIY1J:gqUZoIPX#z/?_5dR4U o ?zOd9P+ukz}ܴ6Nz}LYƇZ@.ZDGQEj)f=b jeE2KD$, 4VKVH.=rDڠI S* ˬܥ=+K@w qYdN&Itkㅮ6Xw]7B!Hɥr2{C^%#"GS yM$FpaVK^<4gR~Pbe Ki5I@v!l%]+^b"4!'!)+ZZDM( k]PӻIvKfi<.$=sgv_(|rf[>s";z;oMfuPROκC3~elKOO}{g<=Caf&Rl3pf42潱cΘT-!'{ʊ6fAO 6Xebv XN IQs-s j#c5rv8 PV8c_,P XX|X/o'i[88 ?ot#69Y*F֚a9!:d&)%J{ 4 (^HQ`Sj4x rw3v̺,%p:Y0v!ZlGl?PPwڲ2j vB4ƅ#ڀMzފ5&z"Bf$fkR%9Wdr /:TcbA$42 Ud$0xXx9$c}*#" 8 ƃĢ6rySyFe%aCUDtP@19;?.>XH22rY\3 P*#b5r#DS m/\YKEUՀ.nx_ʊجS{6s2`n'+$C.ZaQ` x(xX;C]~x̖v{_ӏY:t/b`VZ 9 w&p%xckJus;kGY޶yׂME%S C4Q\#ٗNavJ:.X_ ďEgqߟ$|cCEw֦_:¹0#8OKˌ 3 eadFEȠ5Ad9RZ 0Pxg`x`@-$i#2p2, h+"`J`99}AzexU`!F0)ȣ`%۝pOIpJ ;ύ "2 *C5rvv߬ +Yfʔ^NɒEttoj")m 5\=v0ʁGk,.y}IC@`m X Vz]9LC6B{dvY =|mȣCA)щ.S/ $dW$.s8ZZB(Wdy O11w#A'F/^TF$D.,+Jl29dF:G'Qa8chQw2j{v'~;԰d/"}ϻtާWィΚ &"uM1.i9יRBjDoޡ;Vm94onFӧ7=2k"+6$"GOWUE3jμML/Noo{1*<߅zHc{U%h%!+ "cpt.Ť0m9Y"tC7}w/<@Q3ayo]Ğ,_*^LSs2B9=N?l m\wAꛕ%WN.a`}m`}5!ak1a^$2m +17_0 JvWV$m?kVBOOr iZ|=.Jiݖ긓tI'㳦EW=uc#~w }(k y6Kc[1xti7XDq SGB h?|Q:{?O^Ɉ}}iAEOҸu18Q??&QsmVydl= 
-%O<a=D`x:}Q*G?_ONL?Չig\ׯ~.taY<76x n__e/~u" M^={Cwn-'GN1J^I#w~.ދJVjEGf(+o6Iu*9>ud ?{aZQ oO!!=MBOo;W惼{zvU,T w5o淺_ fQ.5Z; (9QD_!M;O^AƟg3,~쌎u檒os?Nsk5sſj*32-&'Z0*[՚`Axlot^k.qDQ1wWNgMMrz|.MqcلMqyZcP?[ aC,973+m Lky:HZbypސ aT,U*Vw~<&g^ PIa2X9R1(,aSБ% FB.@1MBԨ4MR&d :AAǹ <6jFeCfսK5+"闓LG/tw7z^) !y}^1nO 쒲GvR\vӱ\gLi&Fb@+5J@!ǏArIܨ&,ѤUԆ ْv9:!dy.$<(-t=,4B /%N m ijc:fMX} ,gQ:+F-GpAv:D%N H(d$)EY&"1B)?]D\{;?LTS&1:rR8?gǙvZŰH)LzV~'-{b / @h*F1eҕEC9ȑ tgPn*qXa=xb+Yw/xӊdɊJH 4h ,hԸwXkhiP9C3AꭈmԖ8R胩\E>QYYFvɐdz.9KE%,VX3q$9(=U4lz4CVCYY:qbZkI5Y;vΐ(!!M5ՃMO݈$\E˄޷MOs=(oHb댂6X+P%'PЛ]c!Nv;kRŮ"tb-[cZ:SÔL#Jk9 |8,}2 *55QF2)MLkMɻ毺ZJeqJαɧh8+L3^4_ w8GM|n~?Xҟ೻>}c^4X}˽%!dk>Qk̤[hg5JL*˥fR]m/'W0RV [˴m!lj΄TLM6 GE/y EtA1GNNkg9y6m# ()3l8]`6a!8lEj|ef}g}Jz*/Lf%Oݔ!@~Ap9FfIٻ6n,WXf|hpjӊK<91['Rh6 Aznl $ TR$2A3⩣FkX9 ôƪx`: )Bs|c6|ʾz惠k&:Y? 4R.R4 nBkCt DnM%B0#pyU54V5,|,SVFA Ո-LG ă4P!@FW.E |OVy8;3y X^GQ4z4y֟bB|Vh[ft?vnhgvM&K-L r39ycGiMFW5\JŠzq;+`2Х@^" ^g%(^]w4A^Ԓ-y|O,|>,t)7\w>ûO~O>Ӈ&Qn{~~}V?hAz4>ohaG&[=+YM܇#=?[kc痛o9TwCOt5ryN}e*9>:~ZD]֊Be1~@CZūbݡM{@I\󜪩"$b^m:;n3\;+ O޽Fqvi%6nlʻ7= <<.>#r4F=IFI1ĵ͠5HH \4J_l "9<`Xj#jk%NR^rNRt޽ո.s*^ ZɭUeV먮lNKG])ޛt ۟&wo/s0z=0ެ ;+Zn%xJ^s=䝳ͮnm懃۱$W)x}/KRy[Ms٭ 1=\9EߚrrPs. `r#a!)l>ss)OuշX)DJJ_]~75͌T1i9J "~Nd_G}י;ٔa/A5t4hj6Y^}Ig6ĭLtkRV.a4sji'X[.{mC%IƖ`)ہr_NLږ~@o@Zxf+uVb엉^z`Ԗ6,ȼ4 7I)k glRA0LJa Oq͍N6p/3ЄI^9܂v5naNaA 'DD.p'bms'b-"IQNfl =y3N嗉FVsW_1jB; *Rq]ؐoS @ŗ5// t5²7^2׏+K-hKX4Y˵pACZxΌa$с>ipTȬ`kg UÛNF7ӹb݃kYA졠sjY`4ZbM\˯BM,1"VɫT8Tpb% ,\\pNDeyH]p{$%y#rr!(\Xύ$!JʘT&H&_$:j>1ASXÇ&8BWK#+a+[4stoe9s3?h)G O  U~ ?KYB_ U~/T*PB_S%q! e|/2PB_([*쒔]KRvI.I%)$eH@Ik.%䕔\ҚKZsIk.!#!1]Br]KxY`,REjH "5X`,REjp;6TP!`AEjH AQ4(`,REjH  R`,RW`,REje@Θ'|Z;ӪiV[#N#5ZrOEjpWI 6'6jb][;o\@s]Uibieisݲr''[ NZ NJoljê^9QX* ɑ|9`^b:"!f8蘆46KKS^jhAZ!A&'u<3XD0*'Ӝo&m:<&Y= 7_z)n<+)%.|o]~#K=K:Iq^." /ɘR[B2"b ˢ$$xnc[ݽ.~v0pQp>i&S̄F ڋг㣌052%r8b8?;&a Z)C%cTZJ97gUgG?_jC~fx (R&R$N* ɢB:@T#<54f(H.T\IM/aӁzqOFMSBEd_0у -?%C4Rʼi `OEITPPˏ'4w+..s\Q 8ڃ"JmL8WJt|=3p;0ehd3[SbKd9+v/j {VxӊOJ%NrIpfb/Z'^k4L&ZaocRbL5"uUα=j5f":kf*;IPI&05mXYʛ)p U}A{4`lNhBްwoqf'Vo9Y𖪾eNҍsu f %7Dq3|dz 7 62W&J2RRXSRZY ݖ=M8 !@c$8D%) Dj b Ɯ=gښȍ_$uv$/ڇɦjwO6.WH)R!%ۊk{ I#),z0=UD4v4#@qJ+AdZ/%UxLi8shvPlSWv 6 y8ɔd18!M43^BbLL™)`.* әR;u$߁ %8pAJLLB{Kf`LF5O78[Fo쓦\S4D9zUv,$2`sQ g OTaf6h  9,Ԯvy7cfm]@2 6A*mK2<YH9lȴF:r _RWĨʆ/D"K+F\mt t»JI7 &kL1?l!'+glۣ/ݦ|M.'Uɑ%ѠESwy$&+ /j`=uK'hOǩ3X|T/N2hPi_7t+ID:q22_|6$O<»?v~7u{>~,C#hkc$*&i&ʒIϸ{ge~|">r-!Ynк< ʇTuT;Kڕ5 15mwC#}~qo~LMzXjE-3q>&7פOXGru|t71^-U o`=ǹr=q5b MIѿL; XͨZ5֏LHݻTnrP(Rl@)(T ʀ"(ddRuRPʖEVrhm Z])Zhw7lb}>y?f0xp482"r٩ nɋ&0ϙ/a}I1&pC=&ךW1pL*X!tNq&KAb鬉ݒYfw+mæNn?M[T(}B/Y7).%*vyU,㞼3Mp3p~=ÃhPx<}7/p&3KR7fd`DK8 .M.١fǮxhC>-K侞_f*?+6݂w`?>QjϨbnϦbnR1H+bnRCܯb.W%dIr9Yۛ%߳S89_+!b@:q12-?b^?7_vSq?1xNΆ TQyf2fǂ6+P܇+ruS԰tD +P ̒dJ̅7N& UKAْ> 5-ﯛEiX;:BO4m0xqٵ[u.D.Dz7M?煰'|@NsZ:f:V T܀&dh T!H8#Wu.YDxD(syjk02$Py"I@AJ2)2r.3l1 A[ME+9'xMS An* "b``L-R[0wz8%)Vod0t=y vP)/\mL}j10,>kBɚB`]B ,AE*fT3 Lgg=s@/ yfI2%NTx΅N$ǀ+6hyAb˼?Az+i ERtIImN`6gѰ{XaQۦEnk a8chP2Fa԰#^ċ.>tø`)j.IPy4>87.T` 0x߅I;}tk&-_*S*h\'\W+@RI:ܬcE2\n{5)sa4dHc{úi D|Ba9Y 1V7ԭ`EC_'ǫYwm8q5 I=gDr+W֤zmi/ Cr_8\WT.rzqAa,-ftO۰V書mk9- Bde#-?b[&NeSR^.U+Ԯ.x^8-KT_|iV-<( ̳yU@޳osҌl?K8ٷ0٪M.T ^6kQBVLy"T~_~~(q2wSϧpt|~#6KWկ:yG8is~ xbQKORO'խAB@b0Z)y  14WߤH7//=>]O~}~iox$_눬 ?"/(8Ӝ+R`OE}.}>v!24ُIG![ *w[<]^]%W?AR53'sEh]UZ]g٩d᜷<ݕvb6zۢ~wEjwvzmyzoɵ<)o{ƲD-K{ k:\;Æ[V"CgN;rʙG}n7:boG 57>څ{U遯pQ*Ǔ )ZwLw~[/iuR|cF$1te#{xGt?1WSi'Y7unsg/n%1K|t !f d"椐Mey)d`s A`㠆)3vdY)(S28*$h=ѳ8t^rM/_1D"kZRhӫUoNEYrVrpE[QJ$=w$p#BΑq3p{%- 9&DK1u2k'T~E$[7 k̨)PWeo HhUfk=V E4M"/>rQAYS.Eg$9BJRb2)D@/8ZRV0eϯ'xjiFǠT+ hˊ#%HKx`u :E. 
TEXv qBrUt@~,J͑!>9}q[X_O=u#2*d`tAr-+t2z~|tUvkߵ ݍK=u:R:4}݃Qx`}{ ?yÅ`bsŊ`ߎ7J`K&z7K5OP# $t30YvoSwYY{~okl@*J:pZv^kI>9cғO~^aRa|>ߑByBbeI] J1GeEr XvR){Ca~IBsZ (f\ObKܲN5=)9 dS.*獯 ώO*-:Ӽ̼lggR^w"v߉*'F.>8< h:!eM $eZVʎ6Ƥ4;Bc4%-ZJkI3zeQL\q!@acIFuT6 gɞҭ?|ȾuFۛ{g`o߬GC u2MSlWy}qg6=v<[t9O"SP[?RuIy<[ 4BC-f7 K_L1+vu*Ѐ=Jգ̐Qh ;ʦ#B"kG/Q c *"%K,`u76 gG}HǁA^/#a(싡g#ԬȉXZ.'⭖7g/@aۄU×e (!)^g#L&2D޳3MG jlTm[lg8j|Cggw=nCزJw'k/006˭uĦpOm;B+h ["tR8THkW1:I6d )"ݞY \T =|Hٗe+F) `aj͆*Ͱ Uc,T#{uijo',oQ͋wt6]leӓgtRFo[k> )B+!Ik',Kq 1ڒ} 4Tg:FUMm]-d- @lITDriDlGx2˗\CAfc_֍Q[=Z)iW⟋uj%zUPzJbm-XaH`-C E'M&HV⡱H5ϖ"Cl͆{~x0vl/"BcDGDm{$R$v FD%sp)@l|c+ TB .@NMM1x)jXRq&<9M>R7לF'!Zg+i6h/_ud\\z^gU/.ƸhF\qXBKQ\R.9YW;[A}Mlm*1dw"[|p\ձ/xh{-{S-??Zt[Z-i=q~|$vx_dCU SboN湾eL͐ O~dYsy#_?cz}2wt>OX_=,9iƺ?|{:wLwKg,Ȭ!`)H25=ђ_@ }f(P͐0X#nСUKk9iyJ. a@u4qy׽3Yv~4y;Y0|>[,0&euIa؛p jW^w'PkZ!`05.qjF{! fj{I}B (`;r6N#G hh;E;f/n4I7(d:/ɪ"^ (%Kom9 T S>fš9hPHB`kbϛQUx"BMfّ>:Vz+CbIZ8ޞcjw{͂Wǵ3X-z駲y}伦NdPh #!+ ʣLPdƔZ@[5" B]2aGS( 謊 `DV /B1礣qrLO$=,8"!5>f|z[rzw3d90" xk,XUVP(:1#)Pa)y |I )xNg/=gD yL(0#Rp#d|*qE k0Ba(yiZoתBp<{di J%j:`QPSyW$ :ٕ1[1J};OB^5-6G áCQFQk8n 7ld/*C$ƕD[lbA D2lR"mJcE0h#<äm2ۙI+o9L&Rd+k7Eduh^1z2p&SmguZ&yn/X7'7V ѥ:l仞=218;ߵГN^ݽ%%v&O:?YG_m_beH_9\  ig?B_Y(t?=PZo60XkW(^*Y+G⟳+/҉>~/p~Yzª&]CUVمһ4n=;? un+n~LoI1ξVQ?K/tGo}M&hK]Hr 6L0Sz"̑G &&f5Ffgp5r2}rW//&_wU:I7/QNY֮f5lre&1&s];e$) .!_QSJGÓOg_^ מl|WEcb*7:"ԉwy]E dxRŞtB~xu;O4Z?x-倶HϷd~[e߷Ż5?P/x;x$ /%zQj (RtfN!dkzM@T~tv~z0#ZO=qx[3=3=\sjƇ)Ci\e4wFG{L1;w`u^PI 9:bu.vXP}'ѨSw&[S2B` e]b D![%dTd ȁ}mBQYsS#THJ٢`KcS fAc Ze-Zgm6*0]EfsҶ<#߰^z:)t]WyjN6A_ ƣ **W5"KYE|\Q<ډ;rVȾҡK7AU&nMɚqFJoHTɊ0Ml՜lr-bmz#J*#̥P|KS9=%E8%E%E(=EPRXbp\BDaln('k@b1ՄMt*S IAg{8W$/dm|A5~JD:υfB)jc.H(;iYm\D"F! tAt 2]rHB"l z$M%=I]u2TiP&JANA/P|PζuuKcbqJ;Ou>Lp YZeCv{Tڄ`I3A:,*W(7}ߞFc! -#)Nh+ T '-^-{^ VC[YAԪIhи:pQf[Rh^%e|)$5 ʊJc.(]ԶtrPjwޣ|LG).ѓtOY99H(S<!1s9B#M4!xiXZH\ mA,ME 1AxGiװV1rȇ!Rh|K+{ΙV6dv%tCvF25/`-<2.zpl#(Y#<&ƃ^7GeA*Ly.Y(I`2Rqk)pLL09u0|Drmf3\]XItZҮsw7R%(sJJM$khyѤLBhSX Ѿ4 r~k~_m. ;*c`|f!VFeX~'UȺO_uZHX MN+0(Pllߝs \*o&\6_ z<Ӷ8[LPs|0zY8ѩ\^1(O>Bީ++ ^[1~p%,(d( )WhX */iQ**r曆qfFB9nEnvz/?/j:-M>:L3\4uYqj'ŠPѽoUsϪ14&#8N@= ro,T$c4ï/b@,n6e1q(C4y{[GQC A pQ `Bl `]/7?ci^-_pW^U L vopo{F1Qĝ6M9fa<>~wuyݻYI2Ia%<`>8JQ0tq:**tFa + r:C9S9i*;K׻' ^vbϔӲٿZ ْr'HtBZ~rfNn өM|7k)ٜmяוv"C[m$Y™}5inxr9\2)y6{gLJs<;~ZC3Jχ68ǝ/ff>Kc.jT%oW7blN̥^켞[nnvAAKդ{sn+ d]O =]놭Fѵ,.1=jPgY'߻MewipJ핑ͽuceD[&{8ܱs*A*-1bP_cUnjTnۯ r׿?o}}{(Ýo_ $|t~y@Ow-hS]cmMz:[+YC^] L[ ]??[+C}w\;&BU7Լ(l?a l~Y_|uDž6QQU8,P/C}Aq@la6Hh$ _? ZZ5T"h-h ;s 2q4[9i ٖwQQ;E)ks&FG!$Rqk{6ʊ^{ ؚJ>vܑwVfcS=7ɯ W<-20V >GD?e"~NuwG/U:-srWG[;~p*x9`fEmH1Fq`św?Jݳ9)n{(fbw1o'w \ JcsRn֣N<^0jBמD[Z̞ʗ"d45usebI(Pz0qPOVT6/׋߹7h}{X<暍:0fwpk R|Gi+>(5JT;TKާ]P*&RˬY*g0FJ731B%A~Ih29[%u}22K4ɨfSPK䛪@#9%3řP 8Ob% zְ1rV8M 9CW-#" >+1ߜk ߭|Kcvh =|(YBx<ޤD븷葒jEuK8-6¼S]2AXuf lK(vIҹ߶7@ljI^=p^⨄^_7 %,x#K5/W:X$1āiH5/- 7g0IE>pmysZVnHl\8scc IaT&H@xFr5 X9 aZ1rF!!G&\;/Ǹ2`l%!|AnDO"癠7s<-Zy>5t~}~AؤmR̩fJR LKA;EHubͅEԘhi!|!yc;ţ%A))-sJ EخAJ#L%N fc7L,h5TҖ60Bl-Fr8|JThxrGOS"ÉfmrJ{59,( ߣIFI!RMJTܥ@BLXy9H ݅HN;kd44ohښ"uI{ST)Ƅ"S%Ol{"ag}|< &i߼ן8šX#M)*r\J$Yh) rA}9n#\_HVCwT.F)nF4n#wpX!DzG19oK/ɳlH %HJK-)+'Sh5^;xxxxx }v!&id eAX':l B׉y Fe,3j@0SPeqq,|`VOb<5,/7F:}$Gqa<+7_:Ӹ߃[f^+u7!uR2-AL+^?x-ܡOz7g\RgqR~5]; *_r] ؐllS y`\ i֋OZy۬k]^oi=,(ڛBZ?Ձ92i/8,4dL^8TP fB 5;P3lA8FiC6A^5wV9oc>8J]._!\aeۻFkj`GV >Z<Sj61MU^hwM8O2$mK0&i^^^av/AQsᒓTi+T&A=E/T"8! %%[K#V)7jb (w[ %QMnpR9&E7 04T!$HwKQv }mZ|~.P,HnlL9Մ6!a0jӼgT(]+)|94K/d^A Rs^{%DoHJZzФG؎GҼND*(kf',$##&pOYH1\M,3nc$^ ^t8 x~<%zo(ϕw^JDKmIo"<8YxJJc-Ñgl{UZ4rz U$w^qڕ$ c9YtiĪѷU%nmK v]aZvvaDž[I! 
Bb1(̤΁"L/9XN7;HUXo} \iҐ]muB0IUVII?_}bNh((P`, O>jzǕW=I}T6*ޱsYpBm(jXvgA~a 荵Tl Lˏ#0ƣ6'}ujfj)mvkLpg.l4}"|_[akQ~lcdT,hxja\+֖ț.fA(|s1p3 ܾymsbo[gU!}z}ˀN2\$әOJelpI;PmO/VYH6LtX}s&OppG]5# OVUz6k 'd(P (<_V[[W,|Mz=:@d/E~sx:;SdGTu^y7Rʐ,qu.kGvͽw8(%UAт>5pPj?\?ܖy-7<7z]Mmn/a=8!ACg 2j~${YZ:ivfvcd>;YNRחB((ůbA*<W,#E']&s2Ձ"T'<^uE՘DksLYY <]y fkmdg+㥲?NvQJMR_@y#E8ʝdĩFG|O[t3Oy`JzaԩMë籐n75y+>O%*m~&<}Qˣr7]iZL$h5ߎ"l::F'Œ j 0[0 `f*)`Vk\ߧY/r IoÅ+.Gt^d.aN&VtRh1uJsru4`_gl((T׿8!t5vlWmvlDc,BjT0=ޝ ۙlcZV-M#(U`0y",UA(K^V7t;w)oxɾ%7vyK|u%mj6=՛L"^"'[TnS/pU h'jNm3w?}w] nS}Ya!&WUc&WZwWUmgU{W D+ʊW$Zwj@;qJE+kFmxѭTբbV@\؈pEAhpr-Ƃ+R+WR%:*+lu:\Zuꩁq*aHD\95.\H;Fx5G;(SXۊ4kNZJ`j~۰%,/kBr}4ٔIR7 7x݋{!1w^$eU5H^læc8;Z >F♽vj֪S{+WvաM;KL"FhprĂ+Vkz+V܀ ĕBXH0D]\bE;XK zJAW$מ{[Vju}C\]|MS0Hǻ"D,b(+Vi䀫 ĕ 2\\ Hw\Je\] ,(p2"\!X*w*0~r@W,بhpr]4 a+VY3W+MJ%LLKc%-(͑8 ],nنirɚi":RjLk&j UEsYsyV4Dٷaj%hX}$*0Oz"9װXH%Z όvrqJp\Sz6OZ :AZ:"\oNZ=4Um;W/+Q`DbyW$W+ XN"\`l4"},bWW+ e8K\YUK kU$[ZғPYzc뤓 {,=<C)"rXFɵ6Wz컫F*\Kt4J1f$ΚhpQ DaD\rU^t#cqłW,7Nw\J9xW+OoDb.`jMVbCVEQgLJScԂPe۔Pzk`hJVziRI<0}vJ:߿ãԻذʮg57HriVxfeX o~T0~7lz?xW{uw^js^\SδJWb{+W~աM| W`hdUvj;Xempu9pE-hpruw\J׷w^^WX:"\`#˵ Xs}Ǖ2ND\>&\`Xӱ 4 w\J.W*+b Y03wrHb\] 5E+ gOl'y3j;Xu.Wzbz3Ȃ]< ,׻XpEjO~br%iZuy F-&Bi]]ҢNl汔&>J\Sju0ʾ<1Pe(MM_{3ZFjyM< hb9UˢNVXҵ=,~\i+5jX属;5*DNW ɵ chXVvh^KD#Eb,qz-3[=-LT)Es\mpuHӃ^ʈpE*W,Wb,bg:ʞz\I/ D+kM,"F@q* @\)o+O0rW;XW+^pEMDbhc=Y]UZpu2B*)$ XUZ5qey+PH xW޻bn.WNXTNH Xb+VKĕZ~bvp :WŮ,kSnvf_gy_}R!糴0H[Fv7x>Ϭ hVkLV醌KdvUf6{3>?\Mǽbͷ2q~զ SzNwNgkE8EUuDmsD#wmww_M&Ui^4BYE ueQZt!7Xx' +>LV{y~Ku I>\-EB?ջTj36q"B Z23.(|\IV$f3LU˥Zi,eS?o5Ok _ܙdMfh! g7bg5<2ܽvu79yleQa1yg=m~ƗiIW)4 )I1]wZyCv_=?˫4ʫoăF?moTAF)V m\jm$bZŲ@2],`f@i2_Zg C"׃~vUH48Xt^:&`my“zV`B&dNY*TYIc,Qz MR;t!XgLH긟u&a?տ^A7 uSJ_D_je>ARӠBP ! JJ'u8s)TVr]v}}oA݀gip"ˬAd&\OK 7j `V׳!N}~[_qcf zLvJVJ )  tQPGEP@J8:*oF!nt{9a&$c]{YڌYؖmS.pUZ[**զXڔj1En}Za<TT RcJTConF,CB KD|'A0Pj0hvV(ʂXO~O6#*6!cX>)*|=_ʦߤ%&}SD`'ة;"H!rJ6Gvb6 d{ëG>(%EfH̒7F_ĕ%&hL ϕ"{"&%%@-8ylQa$XnH7k e2䢇POIHD.23O=^Y+@D 9Ug&- PHOL~j>Y@>lO;xM<hR2Q 癓!E YE/YBْ: y&=*m1Ip6_ Cv2{Y ACv0(E[;qA[-YՒ,c%g`Kj,YC m-Dj%IJ )5BryHL&շOח8u9$MpMѐXѻcd # 䔕 pp .K0V%p[4Mj|eB,T*=xewWDں9^92 =+잭EФbȸb%dS@CޡN^J+%(^#ZgxkTyP<$,Z%U '\)I^n'BQnY=1"i>7e=R5>;>6-K'nr` ax:HD) ؚ7i,-F"Si"&밬jPS)NI YѣiW좭:z%Ȑg^:g[IyK~cMrqgR, U7Eu=jv:U?vNTE}[ueKe5YQ@Is'tI$c3nYSe,uƸcajP>`UNPusjFʚyCkB†TvRUM`}ahͨVdR|<֘wlG6ܝxEFcWЅEɽ8ݦXt;/c9KvDm4}RtY?v{c[N镈|l1a[vXᏦ+zNq)z7ڬx?5i:ߤa]L,fb 4j\J8Z?rJ:Ys4^GNAH>{p9`6YD)Q( ʀ"$dbRuRbp:\r2'V-" c.8S "͜ M9u0e}ϱF>h_!'ׯv{NHVpeH^<4y$X }pe-KCƯ̆ i@uv^k^^%$4U %Bj!iR$i;ߗ8;l:4[2KqR@C8M[͓6mM[hb4^]PRӜ5Z;Q5gs<]L<390TN3 E>:#.3-gVffJ& UVIYr{rDs)K.z0M7;R5FZA&{g *aao3c_,=c8bbRҷu;/6a4@ f8~MNgn>&pcS$SGnLr!-vɬh BhT0*YAn&ߎ93ؽxV\CAfǾ-{FmyDN!LZc?ecAl6Bf emׂ*-4WeyQb &d$Qub4-7qaԗMAc5?ED3"#"qFfg9KEr4$oWi4MJq`''ˏ'>?Y{iTt6k{{rGGFrpJTPF"XF#GتП//G兩ῥ1xNZRIK<3cAhJ'|\] t1/']($9dME%l0KBB2Y/$\x#@{$MZM9fQ:$d~ #|-m:xqpJ"D.Dz7ڢ異W|@NsZ:f:P+L*n@24T!HxT'@\I#2Pu|?0PZd 2, '2)22zS6 ^[))%+9'xMS An IAqg`MR[;OSgaKuy vPmm:5џѧzA΁0 }PqQK W%E*fT'tN9" y j͢%"'|QBen+ Rn}D%z >>]O kK`ADhHBUmu?xjJ+~iR,/)VYàWWtB٣*q +%M/{U? PZmo㮶:{eAdۈ#*fe Qv}~aETwk kZ8oGT_AH/J}8j n2O<_V%D==g_3d߆c}38y};IVU`lUM. 
7 6kQBVL y&Tߌ~Q`x^Տ\,OW?wz?8 .Wog#H+xT*\ T]AT #%GV@AǠ(Ƴ'ُ1ڼ-ޑkMȆy.Fƹ\WmK{PXc?,tÅ|xŎ.Dƿtr&0z<+Im"!ۿ%N"0zHQSe88-Hh`aɜkX2#ze|^Oqv}u0#څUoQ*Ǔ 6%x/A GvO■{cZ%JuJVQymt@Y _/X;rEsu8g>%KoNuz!eOE90ƁYٰ3馲d`s A`S)3!J X)*h --n!Ys,׬7qvDӃ" ִOדujt^Mu+?{Kvwܖ{55-:*wA=w$p#9#^gYe0d%S :ĴDK152:>XŗOKl]jNz$-|#"hUfkx=U EG8ϋs\CP֝KDl %g^H[\I BB!0PzъY?l]) p i%!$x`t5&.H TGEY SL*Sx!cyPRa 4'FSPޤ??y}7u8,4܉Ӥ:^0C Š+|sU&f !I~ *09+8԰\2_z"\['d}E^TX(5 %0 w q;9& Lx' }^[u-GO'EhrN&W:[AD+G Gu|=Bj7M#VWI;J4/޵5Htvv::]–~%Iܩ5q -62y@ʾ-]͋mTͯ|\zv~f8[#va.(nlW$4F\ rp8\1Ւf-)ˮfDw3w6:YaB F$ײG@O\f:gMgmnuɮV*vwJ2-!w\HiXxKՅI>U^18TjJ ͉ FU\]t_͗_-}_}|oûUO430t /w@gyMkWޢi.M>{g;}>fEivkk@RQr0px<([VM|Q+힎ymT{(T+R@_v;20&6X`wsceɑxoܲN2GMdYB0C!gFguaVg$z\yDKyN#Hւ\Gm>Ji#xko8 (gst8s#,ksǑBT0nMx/uSАCZѝ4rsVFAFg/w4%;g3?T躳YչuGaOv0з}?ݛ`Bt|7HM$A M("+P%Z3+aPKaii_/KώƁ n9ECǿ ,RgmN=!&ji.*`wH$с>iQVum˞{BLh:.;/Ep7e8M\^ sKk[6vrsU}Wcf8zn0_O!4r6|%0hznjj2ZO{-Iu"JJLAy\TL8bvEjhDLM՛09rFe'rF2FHIJ` 8E2)LD:P>o%Bd鮺5r 1q<|^2=h7^}{gvUDQxB "#%&pmd۷0w}ݚLo"wM/;yΊto;~s"~hfv&73)%  u&(aY0yY s2<3y b5ڬ^Vhn1 IE>p yƑnH4\8scc I2& O5Z#b)ÔTh5m5&nhBYHu<ؼ4_{=#hӋ`gD2|[qZ~|~T?p;ǗkhWڙˤTKAqL巔p.yV.+)d⪢lj^~A^]%6⳧fh]tm.[Uv%(] 7&&E>{=d ~4֋mU31Odbjͼ>~_@kpp M\6%DEH`/L"yy$u+رKvep*pɴI*&A=E/T"8! %%V楊V)]9C"e`T<j[#TɆEdXy~8?!7d qqm~2 EVQܟldo, eU׫Uek~[E2Pwqg!\덞\ǧlm( ,]?.۫NvT:/#mmͲB,Y`ɢֽyQxw嵧5/ܮfO;,ڼΏiD& w;*BZ$W&qi2onksСֲ:n~"nn!slQ9w;Bvʹ\xilL Ps(+!}ޏީ8oNzH YgD`6:y2H},e=e!E,jbĦ .FHpMkȸ'XO\[N["bj~{0 pCۤH@0NGҺ$j˰%L(@É]Fbϭ:cm}y94oEoɻOt&}̔t u?qpWF4=B`_964NIH!SͿ%- s@%z9*<UnN8Mcn]7s*!v$sZ6,-IXk6AZR0IC21)2<)&ep[㧑|t|2۶9a/:0r(05aP\EI[ȡ̹C)Ƚ0r! Le{*y[ e;\e)otpf 2MAz!锭+@PZJtpJp4iJpF@PZJ>xAWo$D LY{R dqyk tpJq MUXUטJ+:wR"\iJ@ >UpU-pWYJ&;zp%(Ջٿ}ͅSrQ ?][r(6'c+ޟ _߽GӪhe=j8~IceonʹZl()ս7F_9_TKTʋŁy荆qchg]JW -@ L)O/iYr?n6cX!Ab:gvq_Wn <}Q,7j}B?\XZD OL3Eb9̩_zQTڻM%*˖6-:}>ik;*Y"g.<*0E;\: >Yz"61,nϱ㙥ܙ\Ұ}3L9쒀%0c|\'Il=XW=T"Bɏ#.#TRup Ah\en \WYZ~p\EBiMZW(5p-pUgWYJ#;zpTp cGƿhQJ0JJt4,#_d&JUzQzU$9f}k=^{ $na}$3;2;ʓ,]ND>ô)Ng'v[*vG?!/E=>s^$ #P rsu*qgZtJJmiHU\>#֕SOg(,EA{L,gz R3-"z 5%b?ǞCKq,d[kkSVOi, _S=ױO*iL55ZSZjYZ8},M5!6ۣ,n{ܜ;\e) •6mej \eqek"Z;\e)MWo*TBZ̝YKs:DC0(Jy@ f)i2lL,Z7!__ W GI[m;/1\}[5F? L\%K=!& DB󾸱4.0N:A-0 fa::rSZ9Ȅg9H "AZp4G;tiMJb12Mp PH6#(O'crص *r>IIRǸTIJ(2ܡ%%#" ID6=tiA 81IQLK}Ýg)PI&yg-*(?{;42^.ǰ4(Thn#@]"l) QX@ 2\¡B!ufw/AI88!%7IYLf)b'؁qoktuf(WJiRTL( W؄׶?I:, |"Z}W:a UlAg7d+Xm9~A Av%4/z3|W4G e Cض[A0z`-ePBeBE@i'rը |uZ  Db$A db=WVcG=D &o2~ k޿ܠCn R5xlPBs|3(RAUvXNZ_r!`x{7R/-;|yߩa}XXQ7 L0B6#&V&3x:pvP\Jo;@6ޕ&*fFVXY4XY.Ru+$x[}[ B3GF;KrIn5h /s` ѨTwKB*q`(CF H ~ .A~,. mM!g8Z.fX;6@!zjA jhjwXb۫ft aہarvƾ:BĿv+AZ964'w V#GXѡu7*oV1 /)Ca ۢ R،J#x, T'XWXڨbj#cR=iom-+5 Rת%_63T;UTk>@׶䠷m=801>@|5@RMwU dA`EH{]`)o"0hlpy/FWfiq0Q8I{m6!,JiC0d@bv-…Pi;bt~5[.ՂʠviL  kEЬtWIؼ`쾛ՌHPa]gQEQ>J+e? ՠL;t5gqz}s{,c-ٱe$a))0?|϶_a!Ғт7lo5?wyl>>b[Iq{nkh@u͗|\_^}ޙX.Zy{n0!c?| >ͫ3m©¿韀kH?w?eOSoOUT1гN= KKWyj:,;Еԩ6MDWLNCW תY::tbAD`3u'̣uQ$tuteLtŀixdf+F^]1䅮(p &f+F[o <:FrbI]ػ]1f+8zt(]#]g ~*]14]3(1UT 2pPl_tpRd`oBWHWΤD/q֋`5ܪ`.fkdQkߟn> /{~ Ԓ;?w1Ay9;ջS䖐Zer V Q8L1~(o-DWF%{IkIGLFiyrmzb1BW4Bm[BWCWF3kvb:]QIjؘ4 ]1\;d BWGHWcz"iba`O+>O+FIZߊDtQav,tdXV4]1i F壇>]]{禡+jgP>~bU d`5NW@IZQҕ}Rvչ5.'e}ảGo:GN6WMA 2Yuٕ\,hZ9 _np9T8>vzL'vCہէN&Ltŀ7j5 ]1ZBWGHWl~"2jg+5A:]$tute)$&+KQ0 ]1Hj}t(c+r*N8={n7>кxt(:Br[M5~bƃP:1HWtvG]1\3=vFIbKd8<fƃWW2E#lztS _ВΕ?:fyϻTfOC 7Ns,z4Nhi:2>)I8'/}RlOj KOvYV˛|.k#{]}3:lsu0m׽9mL%&Md}>TP9c}-%dc~RiңHiN]La"i^ 7Y|X^i}>QZ9H0SOD2u7<3]=wCkih7vv@W^S^mh=]vbf+F BWHW~ v0yquqի o^~urc~}o^=XWõ|/e/~2[Kƴdٴ|e}Oם|`0#׌yd1)&~x;0:/Pv(Iڤ]̲w滚60ދ*cυ.Y-')D1mJ9EeH D@1[wTe>/k[Q1tS|v9l 3b5o_vvw݁՗}gB^OFK[[]?0q2,llskn:L)]NMvK.QEj;wgWT;ME?z*r d%r-.ggU}d`U$D+QP)rEzͯ/Oqv9lw|&d!b 6_̑%DP|ԅqvJ+_HruBP{j6%U'rHZ`Ѝ㬙9{YW l=v?" 
gT2d4ܩp+(BdW&IsxEA9S c_8/ɋq$% M {~)A #g n),͉jZi>:I߾\}t8yWt1_8rrګbZ|4ۮ횦r57#?%r[q4;t<Bdc;X})Qs%+{)y7XtPЁ>˹,9+9;RP@g1) @N &4NM@" 6lB5e)J&^9` e%y>NŌlNFt4]|tZX}P,RJGFI Vh1k;ec+D"?5ч=!I8kY Z| )O)(2x DhЖ\ PZI' CAFL0oy(ricvc`QT.L<1HYx]A, f,&K1g踬<8z1bL2@YSw9$FD>(Uh<ںYyN \JcGcSJk[py]lNU<1txً~e gtGH:c~ڡ!䱋 xCݣ2_Gtɏi٣Ã=^ړ#Z{19x KT \rk񗍠i$$KJv2u(*x29hf,y!H6$#C8EGrS!d+|9A*`tc@3sgVhf T{xms4^m?c88b;4|g3JS3ZˠgZ[ZYq'yUyB!NlRZ?٬ҦA!G]Ev'v6HK_4՘oY>ΞX,I)JwpG)67`1;;~X?V\Z 96YJ(i2gI: 8M(dwmYڥ's=&tk?|?^֮/rU\x0rϼ>!S}?67r}݇C#fCT[m)~qP;q'QdAmҸ'%Z}uceco˟陼j|wKۃ1Ko-]jmGzoD~ϸňO-0/4\`pG ]"SQT"RΞk8i?jn<D[llآE#"XNeB:5Eoϐ: (ٚ] :"$(/|2|*hu IUDDSb4m`3su}i"ڷaǺ z&.MZd넷Z M Fg/Tö>Fi9JHd)L)m ٚ[Z2gեL[4Df~rZ/_c&d J*>x|~U|rj9]9̙*?/3 Ur1!y_N qUK$M$ MAܚj;N{Jx[sR1PKWje+F) `ajc@3 yơXc:abƢzoEe;Zngi5x/ t8MnF+GlN:p )7ٚB`)%Đٓ^a WFdjV6uBtpn'|( A%Qƈ̜;Ny+<nn'$OL;R?S%zUPzJ邜bm,W0Eo$pZPgIS f*RlG(>1E$wjfx'utVq("BcD"ɅI9(HJR Y3ŕ*! q'CscN Rgu-c@ r|J9NBƈ̜QGY cl%i愋'\|rsPHb=JK%G\r 3 vm*1dw"[|. .׭P<avBv6˾k?[v!2^~|#^}^R>/Z>GutG/%uKjWy`Q[ h XdUp7;;R)B[53NОSroU1AE`U,j}Wq̜=EӈN~;cu.Xhw3ǫkq~19[޽l}h/eJ9{8Ok/rDe<'̀BhTeHPe"4mU'DxD|l:>ß jNN2*<Z1d .Ɯɦ mSਈ,u;cTwQ(  ĤC3s+&xBCWwzL>]./ }+vpѾ">ۭ.Mw[f_9XUVP(:6 FS2R:*!8S68OL=;<_P`2F,G"JC 5⊴AJeר91` E( TiT 8{ci JÚnL0(KV{2lv`V]G}ާ;bv_т8T2QEZgv\gو/B}6I+Q%!fOEs(3lR"mJcE0hK4 `0|2W- ৻}?yi]?-YTp1zb\}h]׶JXη4Ĝ_8\1Ŵ x0\ W\]&KJ t|cwʐM_` ׯov]h0|ruqw>i ׮{|VŪD lo__=P#[N *CV뿧 0)QUćKJ'`pewhykWhX(ݦmzWPiw:k2\Uhٷ+9~!&W9x{Y6(.R7? S.bpFrP_1vT҃wmmKrv/rmeujOR8u*qE IYVRW $E"hC, {A ²#~rF#u^wzkF g%XzWg9wr0Ln?b]]U#?V [:9)dn9$QCHZ,_@Dw.4O'4syX)x%*F !*9(!MЫ#7_/q㵯~orn a`&J] -n{1kbɖ:W 0>V8 *jP3@U{K2(wcުƴMiؾ_aCC3\5XTӬ)c L'"t֑}#͖ثKZۢ~SN'"jN JzڣIITEqdݬ 6ƃI)|/Rq7.v b/k.ugax7^춎A.JW=kyf xs1/gs'0F{3 p. ġ'.M HS IAS `֞>ES 7jjw{wNɋTx<)9IEdq̰4@L$-Y XL(WaJ-nm`֝KPbt:k|iBe2Q`b}XP $zKbXw(0pliF(dI I">dӎ@vQkE1#:- ?WA8q^p]Εkniԙ& gH~u*L}a 1HbeݤB&#¿_$dav0D?P<#y(w/cOpfegXH6dD0:}33[8LAޞΫ{JE pCΠ2y`ɥ 9>+ : EϡI~& ~~)I5J0g Itqq>/_)(!tJsv*Mk c]`NGr ﳕ>Ux Ʒwr1\Q2'| bwu]-Ջ]2Tp  8m[k7#qm3˓]HQa!{Zɇl'6{IQ*A[uպՂI&i0x$5ac8vT2Yڸ:g8rTM F0?}xs7w~w}~ `pVGy> ??rkO>k.ؽijڛ5Mۥidߡ]ArMXb|m-wzfn &m^Й$>GrO7\UEA?֩l*w @Oh(.$}ƅM< 'z'q0 "~0Lc^kTJMR*u8 3G-42rIJ  @''2F"h 8KYL ui_dcsb/=E{ݝ{7k= :< Q:Oãx2,^ qMfa~Z)l &c8KP# $g\Kr1rH&1lt^iGjdr@SJ XVdzSwHVPJ1wZ\lH|cC&0ȦP.)qeO r]_^}n~ % a`Cb xozSי˔c*^lȄ~A"sl>TEըN,RکTGRbLZy)^?6J+$TZn)1H+h;UjoUj5po/G}zo{ybV BG.hQ {lP3vąZ.2\vew#`YT\8/Ҍԧ:X}0 \.ȐĊ$ܹ8G x"#V:bQ6¼%ԝW)jї\捯ZߊsV~#$vlO73M%sP0EN93Xi)Lw9a/jˡ΄jc2%#21̢WF*IP?ˣoD9&r6ȹQjEw9lV VBִb9u)& !vicǫk7ۼ4vDrJl,N9#ͫ>]*Pvݛp;5("d"rȕ(G$O wjA{i}sݢEK"A;a<A4Py (4vPbhTClf4ѓxlGᧇ/ĄߘľlvslVu>7wNɸW{ADi)\R ÔBR=|ډAHx\8 x-I`leF͝QFGR*:dY_i ;Y:|q9^u8wCT j[TI¸\XZC'#,mb{ +6e` 0GM,£jnDh:vF&ħsg.ݶI ̑ \J# N&\"WSI#Uiti4rBOG\ W\"NE\%jYUN\}J*9!qcLNF\%r)=qOқŕ.a%ExB&Ev~ [@l N/+opZ0dt)ӔKxz~+BM{ ~74c+ LPfgAkWoYcz AR+b0tMaV< \+PlrS8Tb/Ovg~;ힾzƄ-\SNFoHy*zC$*E7|zHSW@&cDNru**QTŕ!މ/F\_$UH""ӳXH0@iT.vzgKyOw8=%Pe^͞ˌyȊC~r 9#Gd 8M44ae^'Я7{~ux-_?O7xn%œPjZ 検6ec6!Ik.)9JBGD0QE^J3 ັؿsP~yw:.'aX]/p)?怹xv-Yw(՛EZR:k"! 
x=y]s3CER{-$z vT<ٛS՚.c];w4D^RgSy7 =C$NY7Q?~^|nu&or'hܞ,j}Zևj}Zև˄2 %?eBfkCYf%jPR .J&6՛imZۼ6kmZۼ6yrr1"F*gs|f"fEY)DA@dh#"BF&2!ɥgV.k {iMwt:Ǧ;>9>{C^%#"'k"i5 Z9$4}Ad G\dVg'E!/?X:8Q!JCeAF݂^")iqݺg+ML49!)^Q?=ATE#JLF V ffL9Iie{cWAǜ1)2S!'{ϬBR6fAO 6Xebv^md\ȹR9w#c=[VmPTBѰ^J2yOn1{7 gɟ_0xv|<;,I\h#kz0Vwkm NR?{[ 4 (^HQ`Sj4x rw3v̺,%p:Y0vZ܍~rW\}Ajܱ-jʨ-j7 V;3i G-Y+kLE.HJ ]ׂ[ *eȁ !- p$cVd2*a5rީ_{`;lP!bC8ZNY MG0:8M &g0С*":##ߘC@Y` G,$Ml,C>@eDF݈(ѫE}Z%⢪b8ޗ*u*{b^ daVuE+5,jÆոc[<ԕPoa+=mmAp- E?aVAs+wNڠ[[ zF>W,dփ{TN d̖y9&-!H+;vLNu<[a!WJֹetB>B 3ӳu r-zUZNo?񴴌l0Pf@&kT (P$<@*97Ij~I 0|>`@-$r@QdQ@@[ Sä c +[*N| nFΓu+%6(C=y]nwz+.潞R+yZh\Dў 06 ,+.!ku*dzv<_(aPgt)q+v -DI!+\CZ+ƅ_55 8|axRo2"1%re])՞(P䤂 hrF5kR_O/֛Я֫a8c.>( \ڜJR /<\ttbhzx(e-aJ;mʒEf2/QEj.)9J#/3"*6nۄ :k)1 Ǝ vI;Ϲ R{-$zxsh84߲uGd.'\+6$"ݯ{= @ݥ[ ԙ_SמCJɼa]^z:w+xTBX8<c'c~tǍE =GpxnI'>toq dx#,!GNj} x>KY4;yѵ*ipY Q_?=Po6\~W0pٚr+1a˻~0 +1}qin7٫V$~VBN&EeޅqZt5xrPݴ z<4alv؋ɺPET?%U~]-StWoԛ¬כ)1+p#>bk@!ck}mL-; Ofse$u !i//nbO*Zv6BzvOQWб* 5ɗqBaORCK,Nm3Ջqf网|}M{l5,9=n:ZCV/v~_ 3`>B=LLn;e-UfNܛ4A?mp7NzV7 yoszO]T<-3#/IVW3,hQc}I p:=K !w`f6AFftMLx;(mC~qO杺+봅 m}ۅog^@-,(4U=A;kD5̔&3Xr<Z>,JQ"b>ǁ5HmѲlPZ/PC0yS>JR)L*W80ף~loh⍇^@]X/݋/ B_,X,t1_*j/ x\WP%})aU8_g:#~94todn'{' /J;iڍ\vV.jFQYEO.^(}LR^e_!gYἶk Ёb(} M:\zkʓ|hI疖@Ra|O{Cp8(`gyp>=S7_8+]&+D]Yf]=y7v&}py |w~]+[?jX >8G`3 y:Hf[_*Ȣ,BMٻ 5lJm3OδϜ(0(eϘ#')Dg ^#rd33MFR:{md#ڄ0")H7.R^ BaF8k aD:y455CZ3.C\m~d*ځ03dK?Ie:W\ 08Nk]9fj3ٸIOZ G$(^lUcʤ+5 Fˑ tgPGدHZ\qFoꒁƢR [!Iy/>1&M&ALI MybPpL`QHo]R%Pΐq<vjl O¿qIQY2ಗ!|WPP Œm2b&%1 zrkc4{w6'dr 'U¢ T 4 %EX i#@c&dldlBIZoߗ~ }묭wbP[Bˁ޵q$BewqT !,pb}H,;zx)q(Q {@nݸ; 4"$7 u*O&U[$~s;9.wUWh (Z93rрC萹5IF+UD),C҆RFQ@b9aLࣱԢpe ڨ"٨hgmro+И8[T{N 66P&r+V# e=4(MjѓeWpѳc #<&ƃ^joL?"SbB/-7$sB0I Jujޒ%ɴ(ig$J[ ^*!j 1w'ם~ux͖.<&M :yE!\'+כ[Kzɣ6]*@ٴ-#-NI*<#θ"0+jz.qEZ#O3HiU rIF5c u B Dh h]Ĉ>@Ka=hyo 2bH7~4 yĠ9RT|l V`U\ϥ:y+^ P!/,ǟӀ?ﺽ8u1p?,JѤF 5Iy.ܔf]Hy[|oӟ7O] +'j'n։'(|nH᠋JŁ̠1P|Ȕ3CzqL 7[r1H=KfE dIR<K] B'>BcRϩ?/3a1;?W.:&} ]Hf3Po/'Õ"J:JIo| th1CF͕FjXs:+cݙ^௕_+?T\>x # ann?Wӹan$?Mwj]1(Iڞ@鶮^ߍf,/ST:j0-]NAe>>ɦiU+{׺^:.|%. F4}7F;%`#=V5ئF_u˝^r/?_~? 
}$ҺNz*@B']Z] tl·W&\6mevl6:?}Os,>KnD[@WQ؉ظoDN5QIR X*JLDmU4";A9Kj ˆRCS'+J΁WІRqRhm(~dp$_6C+C"YI3^yũ Q%2ɸ0Ad^:'`ؚJlwyu$> z{ HDr@-$vUƋ)n6YS &2om''N@H@1d9LKSr9%l$i` ?6d5leo,+9R Rc{d1Yy(k?%FR)Kʐb/#Q_B$&*)͎:mR%TӜD&dwT*n\Ӑ 1BqF9-4l2KR\I?S6IH10xÞy:q48XWZJ2-YAX(i|{h1C6Ujfxq')C`sC"#x(ŌMz}BmdWB<4MN:-ϼ2lVP슥߶͑)gi'eἤ^_^7`D-7Fp<' })@=֪yUjV Ϊ4 k5=  r4,DJE)M(96[VHɸa]4K %Tdv[ckk5 y`X Bڱh]6qeJ%dD 1??C>G>htMC>.\)03УϣK> L]6Ak0IRnPSFy`- ǎ؃{,"h́},YG Zy`!EAr%"9VʉA{eC@S"*U,1gp;-\aؘ8[F d8|fB<v5ɱfhn;D&{cz*m%7VIH5* aJ #YrtQ4~Y.2{Uɲ0sI3' ֜ˠMmzc<⬯`IKY\; >ZZ5dMQֲ0kʷR_:`25lfߑwt*- FRTˇ/>lõ~ݤۛR=W3vn4匼1uT?L:hyx80jMo?OfSB>>g a+V4RpN篝Mw8ff惙TrϿFxD !<g.fR],R+VpĊݥ|nSUOY(>ZCކ-nweIȐCMrE\8SVS7PJZC1̖\Pf ^$0.#kqՋB_^&2)+ueZug\p+H]3RWE\mE].T*Rrh78ה3RWE`gPe"[TWhX 2qJCwn4./V)ID=T<)Ԗa, wU FW描6Pn7Zo pwHbGOUH",h y^ \ \#ZOi~񦳔S8gM"HKMr KJ19}(#H@]}aɿY Ee?p.U3-en?ÿp?9ŻK]2ȹiW-< {wO\kE6 /y;^miK>-xҳ2<]//i2<3kM[ rd9]$7{f'_#.K-R~K)x%mF|(͆.Ԫ;[K5T1Z\?Y&qxb P\J-\ƩFX鼦 \)JAEo3u^r>-@gO=IfW6tS6ya6&:BհW[3+Q:rBZ||+I6?d[X’][g z33|ضHHH[˚/.)9Ad, 쨕31 c$%4Bee?Q©K`% 7"f[ );Fxgx}p):4ԡA MV4W3^\Jzf1)aw ^n)e@ >Zy$)(0{FaE`&Lŕ %LH+N>Hgن)~;aBr§ڻ&J3 386 b2$o\Tz׆K5|ۗ5Oΐ>{AGq2+/W(Y)I#uL2z)gFaJfn(*fS18-HgՙYJ{gˀYP lG;?}u)wnTb <_:S)/r}Gz $ݒ"l쎶hcgMM)]Zȍ SHGm.ޘ\Lx7t4gsgj$x'4OZ~g-/ݹy=Yͯnj0m<5LE=Q>j9+ N =dCon̽m݉z'l#f6nפn͒ȰhR o&۠K𠁀!LRPzS-!՞J(1'jUlݶbU^!&{xӭf/g'Xޟ9[c+ͨ4b6' &,<SG'5K"9ntht3 Y#id ?{͜O[dc^hߦH6 omc!mj%N^dVZ(ie VsΞ33r.>л] FlT)5et@0q6sux6W8[u\h.K=5=Vzgmߴg=>Co|'] b9m&MEɁۮ+ֳOmCVC:/>'jӭ)S/ O7d]׸ myNzGY bK%n5l趘'4T K&eI-pTLG9@튘0F̀1oth"A}wG}̮8"Gtby1^@Ao}6$$(9Y8:pdS!۞3x=D\NrbD41ת֊΢D46BTjəj-y=TE[*d5t>x"*dTI`C&݆ZLىzr5<3/~=|\;ǯ9_=nqjgv(*߽3ػCPt 5CAǵUﱑ19#Jָ(s[* r6};I)BJK W֔\U6z678LqW4cW,tMXXx)3 ˓[?|//wӯl^-a FecHI9rS1 UJ两VL^α -BCwX9^MDM};C9HRl6SFw%݈䂋/=L;vEm?S)VE:! -k1dǖ:J-D2X%r,[ЂzWTjJtթhJgiALQ ,$0s7.uP\ƽᱣcWD"NxF#83; , KdBV,+\&(ze("&ƨɩoΡIzA lgɅF}RZ_\Ch8L۪:+. 9LKv0Ä.ަgP=ZVI ڮ,cͱ\$4c.>. ӎ]0ø> B,j MяOb[.pnјw zF>7ܗ띲eg5Rp>g=H5f429D57'6\ܱf=Y2 P P,&} 1`m snYWx0~|'iOڐ] u9(2qTHpĶ@bChbM0uj &`x<`+T|\Hk4)dH9 )ss%Op3g^n_釹2vro6$v+.zY^SvV瑁'FW&T3(Y%K|v1$%9M[g#L!OH LIZU|硥nqd<{u $ yFUS8zJ 'Fgojk=RԈz)X6qU{[ %Q5dd2`ġ"R4ڳ{p݇e{! xu[UsK/LA(Pm^ECMzѺ{bf@ b'q%4m{\p4w]cD׋] yyه?]rlE=b=vGq|,IxjF) 9[|:_px^]~A4?_|ޫ__x-W~YaH-]Cq,+_ж_{dqQ|zZ7 gbodF~\cc6>;<}J?E c\ ʺj`V7òQ*&^~ۆ gKfߧot]}u/yϲ^\^^7^{vԕc{v[C>>|W>]w=6ՅZ,rb ~#+|RsuӢ}woFc@aw)PxSN-ET=^h2Gof?."z4M&HNxJϭV ^p5c߲m2;S#?B[fTr//uJMFɻ+4٧^]"RYhf[=Lwu>O̗jj1CeC-Xw.\p#V/ՙ]Wg:+Ȳh!bZz[m&80.p/R`BilCMQAc{__TРw\K*15d8pbcf컂@)Z{ /8XڛbQro2Gs;=VsZݹ)[l>UjL\^2.^zʋ%?E=k:?ߏO>Oh\_ ^sO\8wvkpҮT%|3BwO9ЉR!zag)8?"ūeVz`(:Q* $s7Y˙]Nz|>,߭WFp:A-aq^< qՙ_e'XكeU | Jb){a|N>oʜP\>9Xс~z%w4uե}K.)  jc񘕳m.pe |,Ao=dnn:N>s \qE-D$u|H Z )#U/P` "Cr}MRN*s*$.Z [et%alѥF1to ]^t~yT []wes5Guht,uC֧ljPlSAo,˛o'mSCƖREgB$F!:Tu]E 9Z8!amaw%Ydg/NA"J#&[E9$D -RcƱ'c(;aأ.&ϫ]D35b2>Z 0ԨVslgvW`f-[g<-mz\U :(4V+cD &MAP$F}p_8o+cɱrWR-淖)+ % T&xaZV9=O:7=HFPmkӹ2!$ɶD[{Cjŋu,!M18+aNpy%o"bsUMCQdTLČxN<hZ16𓵍[=Ez蛾i0;LEv5>7E)e[ET q`(cuT mMGAlб0y3s2WiȮ[A}b-|^2[\rTrۥDGɝ'NqO \InrA!).;򬱴F3j fX#)O<$WHh_BRvI^%'$iIlFabJ75R\z63JSQ)=Kb9gʃWEk+OtK^I_UGdr[R'.or"oʛlGԮ;'lj= ; \x|% %ř#3`g?sT/DC(<6%GD9@@>='BEz+Lc9&c#}yUL !w(N ll"@(DDQ9h|]n8wS(Oh`_/*?Z~"D}x/֣' וsv砨0Ps44_,\]1Sa=h" Sh'Y0 fi>Q2[Ud`V-4l,%[f)e\9N0{r|Η}XM+bNY!R,`ţZanKAbf#IeY46=MWq8Q2\9@$PsO8ڧ~I? 
2 ari&ݩśb%V<.1}@Eg2iyIU1&e(L.Qbj{f27 ^Hw@dK-$ uipBpºn@|pi<U=AEp8D y)A~_}-?^\.-4l$D}:MzvP"Y=6g0V x-aCy 4o囟?̦q WT`.̗aeR}[ hjϳ@٧ɪ{Ц8ژSؔ ٜ77>xҍmlt(mËt[ؘ*AS]drSJWdf94cM(r{NJH6l#;@s||uO7?|?}՛>㛟~Q(006A ۣ0wϻ'qSICzb$-wHW&!ݻG:m!h[Dٗ2pRnV_\QzŖQT0]3';.5 'G&|*{ Q`C<hȡU[5Ҳ5N-ؔ3j)a,U48ic0a;^\Q)Nsl; ;›D**kR.K J(GR&T5 +c1S,Nޭ4sTW$6&+utuk٬h뎐a,hk !UÀw_C@)';dS#NS̞KeYOǗ&Nj'A%IObF,Jý&c:vܴIxUuIt=YVtysGL|l."Zڦll_{^;DrG^]]?u Zo>Il>bH|Ϣ) 508EP"dS՗u.;l My?yU[6̦̜Ef`L5vcEKV ˊFm]:u(H'3*tjSʧQX!0'X`D$k:S㰗z$ZPj!N[? "MP౑B9̔M5 .ted)9VvkɈƠK$%\C)&yg)I 6}18O'+U\]7ז1P펷dWYadT\n>:!,WvS$V$OzέM-S琂yPkq `+yڋ(ȃ6yWr<7 AБsI_b77 s<`yV;/,\e 'E&U3EI#J$ ww<Ɂjg;_@Gt 1w(,͋?xmBw1N 00Ԃ$ jl {~L}#ěP'2"[SU p4eD`5x!aworؑ0r^Wsw]dJhFb3059d>Ԝ(-E*l `&*1Zxal-:(MP#!ѧE(iqa[󭈖42GEs&&sIװLˋz@ʙ病晿9p6_vZ/tj($12︘" Y) ځW w|xy !-|b~ ߼{'Z ($Ҫ$aX0s "s[c7\nh8vP ;᫱U{3Ԇt`؁j?Caj?B]=JwtЪKY" w po ];]JeGW/dXUu[ ̎J;ztES+*孡PsW{Ԩ/N" u{ ZJJ&:zt%Z 5tR h:v (y7wJHh]qH{\*}jx (%ҕ$eC=tg* P2MDR1ph)6# ԦXgY+U[h:G" (%hEҴ64KwW|4' T$EF.eOK-:ŧ=2N@uGhsfFX`2^N)MbfqiEY˜|\=T.Q-VѮ6b4Ug{ҘY':N%DzZjIxi/.yNQZQM۴iJ+h{\ޚeVXtz_٩)! =v0{¥t'Z~]{芢 Ck, a?U+h[*U P +-l-+,i{ƺ h>vCT]Z_!`FTk*m] ;]JpGW/-+x{*UV:=QR +Dn1G_\Yh+hkHꋷi>R4 TTx,8-3?.~@;GvtO㕈vjC$"mm5j&d}SY?g#}3>]t 2X?ʓ{;dd')wY™$W5g(A=̓Opi}(ES19P5wF#NGڂtɚ>$ k$}vBW+.m"'s{(%Io}d6t]NN ?˘oc/{^ nN {*0Uh^o7-wRrDhF?p\ol4OC;UѬv ?zw)gMEz8DЏ~_ې>THJVXS jXyxZJG+j]ZNM7E=g&ѷ40L|in/+C1&C{C"8KF_6 uT y$*Sycz*0~2xn/Ch ,Mpi8Fk+F/1 FQ]|M!\ٯap P ·Ͳ {wy_gP JLoovv/I٭a.^z+'Upm?Y+lX/azs~5A-1_̀iv0PP<-/٨,x(;OWԨI*jw1h]n_ţIޟTV-zC-ΠdfT?Qnػb˚(Xn>yT- , ,MC,(ѯ"L)&7k)g1I=+?7uЏp돢BGyy" s r ߘʧHxe>zh aݚn }SnhU C ||iB }Zƌb+0c 6Sk<񕬩,e#Qڰ)|W!Kl2= ]M0"VPD)v#it∢.MKkJV !9H &F${j|*cB)7jJ9Od*N)KcdqRʦ$I$!RHDSQΊ^#%^kBgE; ƷҶ|VU`D?&%^*Cn}Z9$Z4+&Cfj2P=LD*2\oBUV&'ne얄kvtoЬKa|1LM/GրyVFإL;k$TlUu't3/Ğʼ$Ga^+ʓ4IGQB5IAql\:]¹! FK=D! w 4q$ƌyLR䜖)&MFDȄo^PdXt|s7{6037`j>YXa5k>d/ ;0{OQxi'"Vhܞi@_`m:z&=ll"1.BeCqf{"lW+a4!d !yk=m{'G>!Za[[Ǣn޹Yz_xO0nF< "o5h!L9֝>}>g ʭຟφy~z)qnrd2KPRkcGEc^d@  yH^98(H{dIS왑G֌Fil7`jvǯȺoWnbJL0@z@ka@1Z{wm{k%K]9.EPI9aό6T_l6\1$VD2w dYbU(jL\ZтU˽ )hdBbg{8;z/J&6nּOC9ʌmPQ8-hqYgFG)(TSa6{N1r=~Iqu>r :3S;o=^wva^Ņ:zde181<$?MLyX?j9<';@._G? v+T[`gKU ICT!Ij`N!I"~[k8ߊX>^\{iբ &h'@ʂlS@:CլZb0t2 300 lL eSMJȨ%bU BJ&%,qa߼oB|~C"v6b=eZ-~xU"S,)$ i,Km2>B@pOqm;krٷir9ᗍ(>{Gߞh_e݌\zϽ>k1hrBu1/~jqo7zɟ1W:c9c|Mtvog¢.O'Mь&z<~OAN;gߝheZOZFof?ch_"K'.i/Y$ZhcYw8\ea9^X9{Y{`VfzZwO;çېMj8\&#.c]uDT}ߙO&Jx3G TM.9;=Ĝ</-bB cd<<kKLSٌ @9H@>X'd3wg0h ]nLo2|emCdN٭'ƕ7~^_f=Ϲ#-ڡ^ojͅvʂ).{}G ;obe5%hH^[g5`Ζr@7 MAѱ2\QZBH)X9x ,4\|IʐE7T%5>*_)շnA &LBTlCPXާbEκvw`f-Ge-zLet2(T+`X xhFB6&u6&=CW@N+*BQ],"UZV>z[E"%#UrV?Ng?in#S&٠=6@1kM\"'.E2wfgK'DH=og;BqUM% $HcV9D25bіAՊ`gk"RZZZ蛼ű_7Lg[=إ`N{Z 8$gAA0,Thj-#VQNBGz 1 ̬ř(Q9e8B0:*|7<8y61|pG˒Zb 88*^RQ2%W1D2aS)OʧlRʀM3$Bd+q1 L"(Co8;OARn)XU#2TTjYJ^|W,+ױs^#B&b)B6*_ kbPkEʼ(rSX)-ɈI'ēJ[vDU&AXJm5rocX|9ltL6ruȔ\4ͅNTH+!+ DrV]ܒgvmoxomGWZtc{5~18LC 1νVm W qPnHiH+A4~kr=k-Oֿ}U%ږʁHC5hmVC@tTJ*Pt"Bb*pM%O9^![ΐ+z. >9%LWCk;tLTHVIǪ#gi2-)K7k&5^ur+ƎdX59mcհ_0KBh28,"dPP%*ɥ8TL`R(8Y}ѧ~Q[f,N>zr}6N^+ux{Ie:g}uid#z]Rf2 ޣ(bSd"6-lAB&1ʓ In ȱ}ޗ7jeJksgAe? 
}ۻK#S~n췳gQjMf5i3JO=֟Z#ՙ{xOS97&{&[ 1>nnTzcEm*[(j63L^1SקrxI-t,T3vvU!D m"dP8"|jPՠ:Hl$aƲ@- ]FIAe"NoP ]50pe hJ-hSs U TEK%ߒ; vg]G!V )~}_\~9EBJPHDmjY쵈[<4Vs^~be +B4*X@V>;>q4ѹ@# :ZD{8;kn˖.CcH$muӓo嗋SPj%:y5VhL[2W/֪=C392Tfɚ*9bFo"T u**X*VvsTdrhcO2z ԔT[\6=%cF1[^{KZ;* C@7 iƾXh:co]]7f2b !n~|g8HqSu]{8a8eUwEIDQ--"^Wk&+ FE2cC9QsΙDP 8BT&ݱ%Β< ]4" LL'Sb0Q b`YM&q`Xb1c)v^M/pgR:xiI~u`pY Wllk_u!Ui?E~pf_$Q~H0_~iaIc}N7%aM ?,P]5,1 xo.cDG>aY@g8)(F"қ')(lvpڛ́0yum(lA?U6 .*K5\ wU4hc,>֭1(Ql4HV}`>KӔ!|0빍"\5Øuզ1芣dzxfJ#ɏٹ:Uwg)0/Ņꋗ 0\QRoQ?mKY.H`ϋ-/̨nA=g+%1bb$^[`yh\p`jŤǣ颡ggeUbm]]ꢒJ]WW:Y9 .x =+^*U/0&~+_3ןp?|ׯ/ׇ/^?/߿~Z;hyp):~FoR{v9nh(oQ4UlE|rikʽ,>bIh jZ^OYx1Xzɖ3 ^q?]G>G6L^QET-rC_Pb;C`\4 fWH等I)|"GJG|D2?,{jM4R*_xͱ8,(//] :v2rIJHx} *30jA;'TPaR1ix7ȼNؘx$bo38rUNެ<,wͧ4q$Zl<s,O'&k/6$Qs-9˝R +c|&m"mHG{g'th7KG{}РmIh]~=#`.s)TI@&&UzD05aɮ]KpO=N؊i+~?+;[co^Mчކxzz+;n sxn{lߟt;.Lۡ\`\,*urVspjwPS{*(L"OF\[%խ *lY-48\;Ӽ8 n^ٻ煎ʗTW~., XtjVJgѬ$( {Mjt\."^^Lכ{H:Oij]O)bttڻ " ΆgOFOTslt=cHbEdܹNGO|IE4* o#[9Ū659*ofCGu 9+\`f}ۅc($pFu2OgKl0G x5^rÜ9Q [cI%Vjnհj}{5΄jc2m`aSlI*u#,mS#(6kQLe<)VkʈhA #(-eLDa 18n|Zt@sO jͫ[νo|nG=K4'wo͉RD@&'+<葏p#,{!]g))ײqϲ==/ƙ~4BbI 4>g\ LriQt@Bޑ Iw܅$;Br!NK@Zt8QDW#&0ŸdNE(6{dB4D{`a1~:E,Lo~fUBYr%5(X ' 7?l|mC3~uwVv:c[.Ly LqY̙C:<Ncʠ/el<y)▱'Y|ZjMJ+x]:KA_W!r&:0D)X2.*:e[ñ(c[0zq{^nll2 ӸJ9XZP ?@ L Oۯ/(#Y&-/ӌUI稥+h_GKzc0_*؈.cK Ӻ^̗V _3HAiTv]yJj*etgtJ9TKg8T@`LyG͋ꔗߚIdcz_O`{dOf~-9Q FK4¶fMkӐO8p؀w}6YEgڷm`,| 2 "'Tm^N_GqQ+L(6gF\sDcVCcیXu##v GF-z!^[^Z{뢓79u{S(x|q2LC&?7M9_Sƅh %ޤ+wm@/1&7Mgս$ƺ6:xk\YBqt&au}v)0zՌ22x<ϝ鹫#K]|8hULEϮV9n\uoh\8cτLLJoqBWYm nK<ٿoiwAh&4PrMSc=.msE Ց&B}_W7rRLjGq?)#SXf[i52)]* Zl򊆔dHJ( +STxS}V>kJm?=tk-+ڔ{,qZ3K h5 bndî'=$.fCՍRn$;FWtE:ڶ1G.F9UBwtpn]`LTk*]R莮 ]Q.h+P{UBKUBIiGWO@ɺJn ]%!fh$"]qA$G-U+Q[*; JEGWO`ifKBZCW .k`B+vJ%;zt%;;fo ]%Zh@9OWpvV!N+fA%ZDNpYkfDZ1R"M+]`iur}ubnZlb:K]!Mr&_Q\:e-Âk"-m$=+\7'x'uOaSpہn㇞\+nJc+ vtq׿+^a{g`P%?E&f{|鯃{5CBU4W{ BI]ˤn):Y cgY%~c0U&GPx Q `Uo!{We&kJ' VJ V K"]b]͞Hʬ&MfN`(.zx0Vh~TV_lXPy8FyuÜ[49́ݨhM855E'^!Tt#Z˱9Mg1) 0p?vP3;2!LE P=XXE6`KvG R2\??{WFre ]lhǽ!X,2X`}X`9F#"IKJ-go-ZNW:f#$#c9!^ܶ&\MtoWեaj5d`?4n]|몢 TĜc 8U>kV 9L2ܜh}k5T *ckds-6}׶WzNѦl$Z m1Zzl!#V@RMR̥fcהּVYPR#U"πQWvO͹9 AUUW-wrZ<ZIIbU1HGQπQgcK1)F$ZBG[G7h cm VM^TBa-sU.ZHQՒ$TH,Af-7WqzUIXOi;k_,HU.C2+הOtOVX)Y0f;TrNU.YGV+ ZAwT4,xԬ;v4F%~P4@ZE;60 |FoQx5&Wn: uԡmӊ# (%ʭ2TOˮ/ \ŐgK'ǚPX[]JP\ZjhLV0Փk.keB`oQT#!&*hJ֬C7C**T)J`_;wMBAQlJ=R}A19vx X*P kRTL `6+ NsI: d> AAQ)tJo%Ce:|=cLB3m v^Eo[PB]ѲځpYWBoTz 2nPư)0ur9Z ʄAv#bT}>Mox$$XYuavT @T|B|)q488)S|K`o `Ʈ pqì Aཀྵ̬%ZM=n,\ JӬ#;AJ` ;l : C彻r{&EUgU]%׭Tq=YBjVoABb|)Sr|;LJDFBY3'Y.k hvІM4y \+ =sE |fPvܿܣ;ŌTUqǬSDs|1&PTؼ*Ca9i #NXs8L.!wGSݺZ=^ L}6=DV&xs:p4(Om:J`#eIW=$ B+uXF SayCs ~,{/,,tF]8G) >@&=:)XeRȆZCN~Oy L}l$kukx4 oxs`JrAK0-VX52%8Zζ$v( ]C2jB ;h gՌ3{v=`0A:2Б5;حncP:LMUQFiv%&H?<(B;썃# תџ a¬2,J 1tӨL+(Z* sMπ#lC$ j3+ioTJR<>zrO7mU٨$}AfգKP05>*М\1ֹ|^nl=X&]n67sp&~AwH7llrlfѓE:4%GB{'U2bq0Yk Q^yhY=yh4vMfcgry0ֳʌؓUq E!9,sE7* v#b>W ]XSA  CJ@2%ftiPp}vay+ V7;46b]|r3pM9?M bZ0p' [T1"@Zʍ>jX{];p`\SclFeS*ajfx hNhc띵+7+VjHǚAĦUJm̤y@L@R-ź]ۜYg5B݄h-4>x [ŨP kb* g]`*g0hl =+ čvz%p{6!,6\b&ɀn a-…PqZ\,*w. 
q@C- cV4k& $Cl.X,;wSՒZ ]\ <JtŢB`9Y`j#[Fo~/nlvtXj^;}M:nw l?O6[k-yt`{wyco>bAR6ح6୒ǎfyuUm7݇ws o~{\f;n/_ƿPO_F=}l+ܼ۽j;iw}viWÃ.WYˇ/nڎimK5faG[ Ϥ:П4>#Ua3yII 3n/' 1,& dؚOu:$e$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@"HVZRhe9I!!b@e:$y% tI 5+IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$7 fIIqgAܨ)Q@@I@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIl@ZyAI 6LR@-SO K @($$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@tk=͏7cpsqC.al1.`I%cl 4-&ujP:yYn <CW.ǥ@PZΐ~Atbjuz)t5І@$Py@YȂŸ `[0#] ~1X Pb3+6j>rܨZ{t5PR:Cr+{Wr] /Zfp ΑՀ6*>u(:C d/ [p),Zfp Y都m>FXv?͐R^(jr[Wx=n{3k8ŗl/]V?n7÷V5&3?R@/Ux1ߣҋRjj ΑqpK+ Rj=}b甒{WHWE݂jq1t5FZd>J3xt5-jaj1t5Ũ+?eNJ HW1i?nm6oW_ۿ^bť_OT>'Em˛V]ۗS0.uA XHUX)uiLtTefScYʖr)DT.ZRI9i#]Ite\Vkj,RuGiGG)׏ZڜՎ,ǾdFWG+h^ܞ5R0</qc'-Exa@LxO>c&hn0L&ZF[E|. 5!As t%/{V% ⱃ@ gc"at9{F o0crϨp^X%phu)*eQ22GP4 lYo5uϸ1gk BՑ$@ԅ$Ac0ѺlcL3rgzB7cF阙\Ͼ^MG%Ywq4.MvTFOU{~ԇ ϏgUbi4;`DZn{^‡=Y:˶I|>J]wՑ{:g?dTdS߿w9tgo`䍞a6oպ}}7;~-wa(;r9K`=WZجD Bj*X@EMOh#e,\Рll2z'aE{3rgY3Ϧ=# ^WG_ubyNmV~4X=W+xB,١dF%>#RPZAzT&l2HJCS!\.$ՌU>V+kPD4)iU$R/k53r얨;*㋭ri:bNn26ebELoef-v}#UzW[,UjMooٙ-]az&>܍/?Lzd>ᑑ]n>ie|~uj<6Orۖ~69T?.3#]3qOmGEefGR3m(j\QɃS>'K"X-TGEw@Dk%ZiSʩ AJ D@봬 +4#gгL"s|F+/&tR{a6FjRmyȪ5q6zu/n 32$-!d*̼XuUv(P(CqYYm&cF9 e%oϤy,ٹ @,YQBH(j/$BS9CQP<6~^Β稬 S;)b<_|[tc9kF΁r߁:Q[!"*$C$ o*< Y HbJz4ˈWY8(БBDܞWRQߘ`"Fg\f & s~)A 58#8#PLe+&ij`' vZxO0>һ*j'-_HJ̅J˂(deGX_${g2aPM<_\Rml=Jz4ă#C܁^OS/WC] GyDtqhQ*Ϧ X9YDG!c~w<.~ HM(ye2Z5d:{+C 2c|kyu~|b"TTYŮ |)5SHRyJoA<[럌?#i!"X"9 V#Y֑Yb`1&T4%A.@IdI0 R$SѦlͺZE=֪l$EF&(HſJWƛN88/YIDlAd87BGR|2t|:+pnqy$0D(ɔ`e%C%H"teTL~@IU3y g&mqza`Y@@eѤL!!xSI۞EO9;׶mߓAUd'??-dA[N&;`.gN'(,0,8'/X`QqhHb9Cߔrvv3"yWR3JϢ"fE%FTIA`:E&:Z 9-uPȚ@i[L9[S>zr,UD@Sb4m 7،O]wso$u]^nTc4Dz6fV#넷ZV M Fg/XYmC26JPBʦxͿYD )m )M9vYeXgv!sǴu{zj:ڦJj}|ۭ*̓>qusH؜WRIVfP cBV(/Na.YkW|&Y* nIЌ;6{JzTTzrG%7BJQJXXI֌9@3L6cuj ՠ O wxKEc8:O'ѡxz7ͯsFg,n!z#(&J!\C֒>x)Xb_4b|S.P=Q3*% 9naNP H7ٯtB&hfq֍`wϴs!%I,TG(ѫbwBQ* ꋱʶE<f!#C lE'k eʊIc:?HŐH,aQٯ{^r(Wx68FA#qEfd')o#\ Vt@%H F ^ "Ƕ1Z?Au%)&2ANgR5IH5FlF~xwy#ysu6ci͠3q(Ybs={JBQ'팂.GJ NI٤nCX T:[ږ_KG>e-ѐ0> !4'~N^X5'Gǐ#b@)"DEfjizA=W}Y} ^z9U{y`Q[ dtMŋ*UW C9%0v̑SA TԌ8A{6]dh#e, ׃eT*ƫ޵>qle_n~?rm:k&TqQ]D$aľb ql4>8>< O bLlZCgi9.g׋jS& fq2-}S`l:+`$v~c]9G8cZ}y'X4I p dT8AdpRf ;a:9S<5<舰<8Rp51p/5)H##Y' $99S1;OR$ e' c`茜5b{;-Fye$pF.V%VWN-Vtj?aR,(HEX {&Z%f[O73Oc=#Ϸ*s}w\ŷ+f]-Zxv] ۈPԵf7q@dGR 9d>78 `>9$G2sBGi6n㛘u6O16McJ:VD׃-@j8&`?to?O|v39"]кݮ-.Ml6l!NɏX/Hl<͛)aӥKi˫AmIۺ&oV=VRYo\,7zA6Y`{A?+uE Lӯ˞qqT3.͒Vmbs%iróC>zRq1Y8U9_Uumf4o .lPFq{"*Gիwx7炑IѴ733ퟯ]9^f2 Yut53편STƠ%"RgH֎Ϝtx+a|R<:=\!աxZw6v=nrc6EnW%t~{C?7ԪνgܓoheRhK 4-BCk pgd}7A}؞)՜Tf(|56\ΞV.+CN")4Ũ:Gة`J\4u)ȅwϹL*&&GmiĮuFΚ{s4.9#%P–#?C[cߴ7t[sNXT[eFwμʵEJZj*fDh ReF[#>''x ^FRtlZe89m{v?{X &9m9* jtM49`DBݮnqwjA=JC}LY̳h3t)kr91<$(B3#E͜HY~?\ CkkZ"s8 & C Tfk ٗ)RY$NDngi?d QPEohɃ # '4,9Ẍ́U?&󙟤bO{f2-$G L &cGiZ`qiIrsLXarB&ϋG 7ՙKWf fn"-*|*0Vn'x3{Ԁ22QB9a{gdH+){n2Ibo2*9ӘC)1ZY׻f(߼k9c|z XSS|~߼^>7, g7# gsbN0 rlH` :V |pO %IZ[Nۚahbs80jT`N>\zrt0]9Z[eduNWFd `?FUOr'tL U*Rk~]1,ǿ÷o~x{wCwa4H`'{ =i%iho47bUn|v-^gB;b ??}=? &.fխ^\PB"ξdYw-MGhbPU4Vb@L_%fYDִpA^Yq_x)A&9\zxfsPB"Lw6dCP&0:trUF oh4}nwtg~jshsX՛jًm<jik @KY "fi2%,3W*Jߒ66;٤{rܻg3V1FHvıhc2KRReOr2cTmplx&G+j:ϼ*|f=evONj%0LR1#P,<.9n=JQcn{ F 6 5LZ<^g o=Ybfn 0H¥s.E:kk*@=ƀż?Jֵ`9 FWQh$$ڥo¯m}q+f`%DtE}8!{ѧŲ6_Ws\u $\IPFQVxF*zy!1zKZmQСF쁂mDK)aƈM,gaڑ b %a"̂NEATe3&#jUzetGᄌ>pJFRT*p [Usx}ybwytt4,lPe$UHIXNJ[.ƒct+X ZY!Edyr$j! 
KVSb,yTJp295>&r #93DR.OśÍhc80V2Qy ſ)~b?;3z@uFp&/iO=lCMW ώ7ߊU6 (9;)iUϊ"k ;BEߏn0{V@\Ïl~pwB @+#;5+3K3WDW|v?BWtRx+TDW\" "rtE(J(yMtEm=tEp`ntE('3xt%c+l\5tEp]5cW+B :FR>ZuB)81ҕ `"3Hh QJ&:B‹L /U"V"g:F|| 3P3^%7|z6 z-giB jci˕QӤWftgؿ/dBʘZ>ҫv?c=j38؊ի]DžN\W(g1"GmE:V!|nݳu^~p~h́#'=NtҪ~p-;]J]!]q˵6\VCWWZT:FV| `+\-t%A=v"RMtut%&3HU=cWZ:3vB &uutFUDWXI^ ]\]P;ujGc+a5 `µBWV J#':Bs%] ]\Q5lte{f#+A87n?5chS[7ȷ害|k4h{i?вtoZ݀g΋W_[n n~t۳4c_Q9 r?^ mw~(fs\r]9EEt c"ʌq]!]q ?Y]!`M5tEp=]J+':B`Vup臖&z "`P ]\Y ]+BiDWGHWK5All5tEp-p9v"r2HWiXEtCWWT33Hh;]J=c+#5ZUCWz{I? NWm++""B+B+Go ]#]i0Zw{ f$V c{I|[.:&m/m5j11~Q5͌l^:kĦJ[-}jY|Cj +:9,+x"'h;9BّI_KI`trpaMC 0z=Q$۟VNttdVDW=!+l-th[}=]ʑMMt>tť&BjrS ]ZNWRۉbF [ͫ+kQWq;]J9c+pV]` LBWVLMtu>~r՞>kܜ?}6s\ҿ x?# tM>EL~ϋ6_P5pr|映8gNiv ;C\` }d)7菶Y]Jw͚'_ίNnm576Fb xO 1?-ޗEmj]떳KX6 v㿒x+T~oO񲹕X.sgZdH1x("dVC %l)XNN0)slXoP(>Rz?VO0o|,wzHmłT:r'_H2v'ܲ 79vVngi2V?u@o;xwz.fm{wm k['/W¶7^ml魿:E-q;cۚޘ}9xψ:\NO 3g8_-)e7|u^vG6f'+fK'gWsv'/[{1mNr`W033 {g U ]YJϜM4R@l,q`l 5IRmT?_-B^uH#V+P mC&w(,/S`{ΛR턀BV02hҡ^ +HB9+W sbE`q-h)U,Bg`na2p y$UG9ܵUߤ-#Γb#Ӹb^o.gxo1暌sM>NCoNi?'M;E:ɛț+]87_;|FEsy1讍R˧2{.jsix4F@cUFOm;qwcs/*7r/2{g*ewIsQ=& \t%9s/) ޹A$d(aF"s`Cʾd\ws/yVdȬ`ptW#3_-~:"SCO>_J]};YmhgoC??c JfR(ͼ E))pX1'@VR҈cA,BǔɅ%HVZՇ\a #L%s/ǡgx|Ucs꾃`hJǧ[3.u~ŚoQТ6Ew:tڙ鿭gi~er%j^y6W;}LN|M#oPVUq(<1olsr6o;v*!{זL,uL:{<ú!qrc =}f\&ǵ8OXIKIIu .zhǛ<fjS#n%h' s cXjΉ!sb91Rw˞Ή9sbeƸG\nzy7puFj˯y`hFpMp5Y8> ʎ o7.!+ 鰄AdC{Y2&I} 93U :)$:"4N {Gn@Cre X`I zPY̕3C2Jr,Ɂ`ǚMG3 /t<6عnŦ[;ҭuL1FS".F;)yJńPra0WOX[̠f)YA`2ȩ$cil(ByMFM ge08}NeLµRyeZG*Pk,k oDeU$\- mn?sSAC^(W ϲyZfJF3ZB>Qڦ- Mrdc:=wi cN{~3=ou㏱/ZíOPN((}@!ؤf 0؄1^|`˕"osˬ\r.cBRĞY7ya_BRIQڀ΂1Sb#NP͚Jb>h!&=eWCpTfZ' adL42ZDИj:F֋OIIP83mS6l BeN>VAIRW.)#<8*# .{2ov&ܕSCJ!̬UI!e&át`UXwHCSO=BIL 8d -EI| ،iu ~Q\2/S8 q0MCBɌҕXd5aP*+PAH _tsU|kAw:w ߝ;J 6[ZOuR*Zn}lQZN CU'gj݉NqD/7I%#?g.S6BRpttRZ)IupFtd8 mqGE9T>cCei&A1D=d_+P͋O}Y iMо]G."WMcקZuA-;9OiGM&+Gl,¢-hO MPڵ9%HB&N ̵^{fJx%m%vՠdk ios贁(I3 R̐:wQD2MڤVY27Z:VP0v/nKTVd}y;)N4Q,9+{{bʎi;>t =ųx JaM1}XU=#4,mQ=@f9;_3ĭmw_}UPBd![Z$jr"_h1hAM[:&:{W ElRf(P+ƊȢILt\zr4E{٦)0]J+4,Fg3'Ec$:MIFΆo]B>y )~u]hgk|t$wz@JD.j5ZQXFb q&jm %Vbоu9! .$Aߚb"4!)OBbEiZ5[4v"gti%4MK v!UOJں^I[}|zs99߰uXĝxr޵NL}+cs2Y{Kͪ|C)ͨvp*Cf-gV`IJ-&I:# X2{x @̒  1.(!)n$rfX *հgl2 W{_xd|>pv3 O/b@|6,I\1 18W)7&B3$P479BUĖ e(R ) lJcQިhl;. r&YP&̤;\8#P 2*KZd*[ 0s1o7ݦOa[F1:rrޏ[s4>?{8mŏp)y8OK`VX@B&7 h *jK)ivy&`@#J %ȔQdQ@T SRIA̘זWnU&Ĩl,(Xv'%l$RsV *C5r6? @+Uf#&6h.79-H-A#_H0Et]N3nx<ƈ`aEt[K#)hW`)8e! sV1;u:3z<_haКgM?S/JH]̍\ %PN9^nSyaTA=O[HL\"EY$AiJ.7^ۜt2U&ڜGR_O }'UT8e9>H)mٝ e?JR /t$K/@GHY ![MV" %%G)|`deFDe`6nۄU{Nkn&3wD 9~h{=ߍrl^W\T!/-\7%v*}2^$uQ-)̛Y_]4v͏SSZ~<`yfWX-?h^D&>. 
tq9fq\g12N+~2^oV)Z-JL Ӥ<̲ Re_{~|p~J1M.ܾG{o㝂,imni'^ stlgj0^hxߕظ;R1n ~qcOc>f|;&A1Mqv?yח/|M#vW3v`*XYrg;&ai^ ]Cl﷑Xu6=scvw=\NϯhMc^}-G=Ϥ=.Fcf»b'f 6Ena1Gxf\_N]y+1_8xg,W9PȞ l08FaJWuIdP%-ebBc)R,cY~ QAu= R_^TPWc^w} +&1"G`٢/PC0y[ 2>9S*2U!Sro2k^e^Xi|}ZCG?'?Q6ZU6,E`MMpO/C/_x*Z yg:#zi)Пdz"d;' _Jiz#/?Kx;jQf9EG)oY( ;9{_de,e@Q~L>at) ]<Ml *$VZeM!oIFrtݕ6!Wi+Zlzfl;(]u|ҁȬlX 9<ɦ4)9z/'a8cmMR K'xsi,pAO_ȥ!8qʂͻ t Q2Ur޴HXTɓ## Xhp18spV(g*' ߏR3+< =8r~:O_ xs'4͛Gr&B曼tU2CfͯW] œI <%)$So»6cW{VԳQG(>d&8㒉Kvtj\#҅Gi7VT9.ږ4[ohANy͛iBhl/r.4rhe_x~qy|P+-|qQhM_#ig%APMs6>vn,؝lyhbEv>ΣӳJww$j?_ ~blh&gR.X>8]fk*AIQEx'Mг9/nΚY[?kCݫ`ndV^FĚ珓i({Fe,0خK2۝3_{Ϋ$˲cz-U~زT*./_=z!-ߔnʰ_'7;<|/ ڱC{fsxi?}lI嫯vۅ2y&rx1J[ڑ3EiU:c q ] 榚XO (dpnG+Ta ~]$E9YG֮hZ i'=u..:x|d]JjuU"j8!Y \|J IE5s^T\do6V\gԛEĶӝ4=T^ix-Y4NR1\uJ> oOćb+c[&&DQh (j&`eI.U °ފ^vO?VuDjW匜ӌ,Nup}coP0Kҩ_I&ל#J:2\L=bƏO{ ީzez1ނ<^'MP7<7R +̿'m^M&P)tGA.:(Ȩ"g~lcy5^ =Wcqk-)ҥ"T)Y1&I1RimB>,1V}%ɕ*DQz`-W-dZ¨78 ,K IV_|G'&Y}/}#}+ϽK0 LktIwAUmё+ܹG{ =\}3{{-,+D'=[d*d2uV A㎒q]2)j> jU!Qgf.AT.Z0פJЧ JpȖ+8 B} 78KFǐO%4_Ob?l˖ n8.<\T2^,|OWORf=jp ]5~PڹCtt芕1rH]5v0kW ]+dat*'][ujp ]b骡Դ'HW0O8؛j9uВuJjOWOX?}1} >745nzѫq$INHFz~wɥyΙ~tRuTBXu Wi_M)"xiPL,O@&Q,uzR@_l3&ou[霉D5>ۤbL*85n){6 ˭WfQzyM+nJR S@ۜk];<;_+BFMWկL-L]T;JwDUeTL-n^"G/IIQ'9wtdֳ:2ٖAk; ;rZPsRDvɧBV ';ڱZ?>Pf[@҆2eWN WZZ &6Ud#Xql6Y&Hvix۳{eNf6I<\쇐֗:۝ eΰB+*Hcsz{7kswr9{9)A(8g_ۼ9RYMkh#,^ɡ,"4f(7v,"%^=*ËWkѬv[krJ@WjOWvn@tRUv骡$'HW % Ҷ`VC ]i"~@t֘UkP h2NW =]=AhR`Ukx(tвuj(vЕ1]y~틮\BW -UCHWl dY\vКwJt&1$gz&]-ˮUCKOB l ]5BW yg(7 Ҽ{MǝKRZx%^tz%Lm\ÍR=  }E SMNlrZXZEB#^Ug:\$1bX=KWl<;Z뺡tEoG7%^?m?h%^o{{jp͖WCDW$!v ]=]]1v0tn;aJhYҮUC9ЕbxH փטUCӕb/+m݁g^z:Z ]j骡4{u$) Ծ\+CW@wQvƞ^z-kƙg9kFk}<<:j5'x.oozR:>`s1ڋA͟,kve|w_^5O:qGϝNSl_W">#^:!gɗT<u0مg&gW6D+dz87.+txyAjvMn{wU]Ő:+nvj_̈{4(|)^<0UgߘcLe3j#PQƸ\zܜhp7 k6g{: x{88Mgz.:noAP_F_"W] Z+jtFf%GOE,2jMe$'4#l+y)$H>n=/o/0??/ \6W_:=OrtdSq5ZnN>mgGFa4I k]֫"IW>Z&!i-B2cV)kU5.̒CZ;54JnLCZSdK!<\y鳵0Kho6A*Q*9΁ j!ZN DkK B.:K%sBgxITB5Ue1xlhѴVӪNߟ@R-ZI9*Ͷ!SV@FG()S,yS$"Dca!%l> 3* YMb1Zs#I(Clh c&OjV2[Q !cugІVۖ_ͫaB%c*0_@\LQetȵ:[6gB$W•9! + 4b&iw(rYThԴ&d#<%} [\O\4~['&-Ͻ_7BmT%iEXB SI 2Eމ ^ƠLΚ FI䥩hx)a;A)M .`cZį0R:CjaZ(M`VÔbDDd/ EvQkS8ecًL b E`!,وE@ #5+(_dXZ AshxB۱.3жW3pP(l`#֔(́dP&ZJ91ОS\q,Y{eдP i+%U33.e]\`xV (sbRfa5FNVP*K\1{P2[2j@PA, R [L&k UR%B1% F 0' \^W}Y(fdJSUP,\:X,*`2YVjt^ 6cz̄R ~+)f@n ¼p5P VCH(PyX&TD+IT$%KUE1* ƒ]c<½04Ǝp@C\-Mɷ  ̐6K@܊ *f&dY)ٻ6,Wa`R}$A`AuW͍LjDʉ7sűMG+6`K橪[[U 69tHSQ%\H {;9jAV a~NO`EBꕶCvZbEK]VkX5}Ԩ AIRDeh+ym m^kZגv<C"cM ep ۽VQ=|됵"?\ Iy kmnk],"&9i|1&PTgbIJ$Bm~Us?`ąYpUf${ů6*z=z L !ГC^"3Ji=B.܇Ah]2" Ȇ)Hys ~?X|E .3HJnBiD!ɄOWueH6U yh$/! Zeޜo B/[M}tb›^VWŅT\௜FegR,M^Yfw׸'+tҰtslޔuCYɰ}K ԣSۈ(S%30HbH D̽F Dpc5:kujJ B+NG d2J V@b%+X J V@b%+X J V@b%+X J V@b%+X J V@b%+X J V@b%*I g.=N JY:V@b%+X J V@b%+X J V@b%+X J V@b%+X J V@b%+X J V@b%+~J 4*%hJ ?%cu%j)V#3X J V@b%+X J V@b%+X J V@b%+X J V@b%+X J V@b%+X J V@'BKǤBhYpF N^ >@g%+X J V@b%+X J V@b%+X J V@b%+X J V@b%+X J V@b%+X J Vt@.wח{ٜRewqfA])y^w?7#D `m[p GQT#\ RF'!\zmЊNW7Z芖3v"2] ]hpH9 ;]J)ҕY]]"ZBWƱP)Up&a3>fQ<PޘrC+`;Cð⺄fP5LZ=zG?g~zZߚQh3n"TuV[k}J黨f5 *s,.-ʗr[Kͺ`[cؼ?ް3}@mŐr)DlZFX3͟n1Fer&\M-ZǞ~29sϦ7~6R=*"yZ:i Vqѕ9 C^QW]Ӿ"&BW֍LW'HWFUt>u2xڏk?LW'HWZkJXS ]\k+BPtuteqFUDW쥨ҵatE(g:A[++VCW6Z9v2혮NR4vEV4hP[ ]m"ǮNЗ:QP;U"S"`$'; FS+g#TSڟ %}te1ʫKV󪬮gݲ =ot/.;ͪN@w(LߋI%^mRl;:O:pӤ(n`\竉{5k~7qgs٧? px]k(: ~9Pmd+tЦIk++VBTCWWZj7v"c[QtЕS])PuDW@+M;]j++~Z] ]ڧ01] ]\"`u5tEp3c+B9=]ؚ^p\I? 
ئt嘮K}EtCWZF3vJ% ҕ:<'50FBWֺtJ{Lp0F1PmoBWƇQz+m-tZT]"]YװyPwWs2%` t7_^\ޠ7Pc}vSZ|~6ó7?aGS|C]g2y~U |{{ξ%nwDcx7yϫɋ|6y[ַM'KS>q /fK@ɏ)ɪe ۚ3ڠ4Xk;HB :>6٤OkAzVrj"'*P 7Ԁ:џHsCް^fɗkhߔճ vigx'>& 4jjb5뫉m}FXN0VshS<#YOjIpUftE(=)ҕ.(Y]`+kYAh;] NW#>3*L<^;k:*V3ө?;jneSbBEq97T3Oh{ '7(dig}}-3CkF/V!W~߳ꧏap։'C)G>@WM/q&"VCW7Z hc+B ҕ^X]r\'m-tEh;]e4J[ŝ?\{[˒'WzIfgzt}Wx{&M]4g#+Q711& .!uY6n!9|r_4u?/L' [R k,uo̻?/WJXfong jӳom}1CmzEZ%\~t=3h{hjlQ 3m^0ZLn2MLo܏ؒrL^~󳟟?! (< s,_>o{?}yY^2z zM"7¯yonywj>\ߠ{J,yeꩍC߮>gp3>$ɏ|sN2W|nkuv}>Z o6֟|3n~> Ţj^sѠ¿._&eݴHAN6(b2^6D:wѶRmy0_}Z."ݵ[9\z[ 7mg"/{~vK .L$Y$G:˒Fq߯حGKVKܶ&d,W,ucm͇|>"~;sۍe[a!Pgklro *o~\zgNA[th'YA+Lg=7٧F=" 4kϤp}l8.D -`UpyV + ^y js|F [ BpA ("C!p7+q?;/S#.e7$pb IJ6 HJ y7J#Lk&n`> B!*mx1AJ]\7gln;pA~O@|,E}^B'?/ ثWNGNR֝NyhK Z曭oΦG6v=eIzP-_yL6Ӱ";,ET7f_z-׸,gK;%PU` T*@Pf HEyz\D4ܡ`XNẌ́UKΤ m𯅥3gȄh_210>tو-mi)#$H3Swp09ymoկ?75ߔ&2d)lԓlg 6d(9ÙLߝM8 ^ž-fE2dK< .@څgB?f-r>,ZC\D,fr:#WN$V–f,r&8^ y2='gW*,r3JKE׸Vtq* CꠡY,uJwLsŷ:8 4Ͱ0WKoOݪm]m@6zZZxuEAHSI4$撮ahXL]Y@ q0,(l܊y%w,]*WXX*#KWԦ2"rL\\Dp\l:yꜞΰڡ[LݖBʬ7^w7:?~߿.ۏGpǷΟa&EQ?o~ܿhI*˛͍اhfg GYrU3@xA{ojyoxj.oTA'(6EkeQJuIW 1a@0]Ϲfl#ǝ *Zeq@i_}I^jkhM'Vl doKC} -ݙFT۴^;$kqA8!k(BMbP< @[g79t]8Agmϥz;q CBl> ! ƲmmXƮ*TĜ Unj? )ZjK,N0`*;t]3qQ ){K%ՉKYJ{Z⭉^NA ݂ZS5t */nFT; {>t^9ጀg8JL@,דd U?bwcߞy/L -؅N SДR!'\R'Sz"h.磤\&w~E臓3kJY܄:5 BP9M 9ί%&08U nZ 19k7"e\]48RћKQ Ysdf6}<ÛAQcjV}pU &ga~2E||=])vZ]R60GS7xNzj~%INhC{d~7ƫGε!o67z>:~2d鎌g{k!͟&ZYFi>b5s//ͭ=cw욭d |!k`[W!⚆فbwXhaVJ(F6*eu1 (0H0̷"Oc/텔] GyDQlkYBsdGl*AZe!EjUĥN_g,Aѣ^kȸ]E Z[6~Z"_>?s4%ܒe 7%OQ|LKcXiB!pdA@]C3T _=>~>gڼHRr[N4Vd|[RWJ$ -_B^PXTL9hׁ礪xwY#շau<ϑv -7t|1OyǝRRN^tG]ۆ+6Q`R<$<@qít^]1HAoІoI^:==ڇԈ۶y! ,~=iMݽ3U8MԬOqZ|ŸWB`Ds_gUa]HE6n Ol+WcɷKYF+u#@^ ;Ǎ3:x+g/EL$?49R8syA;]2:A?s0sۍҵ4/"XEv!*Я wLu=!BZP޾SL94w?" +c@Bp TXf8ASS# Cim!m9@+E|Ŝ{gX{p1rh7#2Υn] $Iu&AUIQPO1^o&<њ8Mc8C+3W1O|VkU'ogG^X'B3o?\t:ox̷IYp"e;&S [-. 9RpvO)d5ÂKeEԈQ٨3!"RgY$c^&6j70AFXӖk3h 1DR%Y0mnKxq-;{pw(/3XoK::!D}$#uL`;N0=rL#ѯsȮ)XD2\#^!7ѩRz%QFo Am ẒۀPׯ3(ImvE©8C&ˇ 'GbF.Gߏ0j%.Cs:|\=lqr:nн>ߙ/YWz<˨ز@'j_J*!76%;Rϧ iWyG)`>lr;_V͡J aUpDYZE 91T <0u}VGzZ6JPYυV()5XC#w&@( H}_Cx CVT6{u#B~Gn&|Z;ߗvV4>g%Иа {8+`@|RW9KYe-~bG A%훧z<{<{mg\tApCE? F(\wټ~G_KT-N+cP9 Բra>TQY+;ZcC'zJy #wKAhh"<$MFɌBYq̀KR@1:< PF?gi*u2nJQ-obمuFYeQ_p6_rLR@G#32SsM?x/:5!#u}f,Ңy7$E<~;nM5!G'ŧѿ8~ogb9߻ۄl}WN]jZՓUn'8'LF8qw77qpg5LŇ;{<㵁,0}Ej`3}[.ƙ9\3l_\W;nԹ Kܛw.8̾3u3n*޳>h".*/60IAqOFkȡo10as)$ l *X>YTB[cw](:AV,׎Ya%F`.3"Hͽ@e餪%{i]7{ݲvaﱪikRN4dC?Q6gCVT;}Dմ|h^t=Θd +HTXSEn8>Gx#/pY<g IA9U^lNa/~|ry~r=ܸwgXhax񅹵Koz h+*M,,Tzz@ۭ#=RLUkf H+$䤎GvicLR8@StϖbЏ1OqP&|< ꑣ5X֗_(+{-s(dT&c"hZB2"b ˢ$ &)$M+&ɱE ILQ)4e#H %Q0>\GU)Kq&EFcRJ¥, D'Rȉ5#gG9 f]ps{KH$*dQ !vSMh> & "PR.8OFk |D& %H)s\g*J"xb0ʙZ~~6ILv\\渢Ag@T@ uDA0Q/i Ԣ Q8s_'xӊOJ%NIP3eq  P;c( g'+e>H[*'yዞEȾ _KijD]1L t"X@%\GhJ8 4gģx,OqȾ KAX=3TeO?6$rʲl$& t% b9OfUXYS5Uۣ&$y{,N)S6^>`NO$m0r {OuXj68G)d}QZS>z:b踉f9zw]"V碭Q&cLrZ^u5Ujl-x9Tmm_g>͇t/;;jzwxր1`/^bZ%\J\lzz8DzUo:@\Z+GDgW`c \!WZWJ ٓ* U&W<*wT\@p%+qZ`WFrZV9ѻv|'D'ya ǂ>WFwFɼp6ۏwDE%cB3@?D\b~z}4&Xcx˒mS}o_tԞ'-+/(3C5=~BVMbLݡJ <Sc -LQN(>a]{]o#7W~Hc>{X`r"HG&߷ؒlveʖf:H2w&>V*%Eӏy„1sBQ>Cx "$MVxbɹ->Ngi,kl iTVVy$\TpJ"dʮ_ҊEޭ ˬ!; x17%gF FL Wĵp*\cHJT}1rƍVo'[uj7i+MJwdope{w9w UU+=*jvpJW䖊"'W$}ѷς"k%!&=\}p%{\="q-?*JypUW(}oN좥<ץ .dZԒ.S /TtPlhr{nurl}e㘶ML ysi k T|y{;7.>ǹTϴ07GKm0sx4j5jb>W! 
~h{K(4=]BA@J+ 1*S"֖:o:Q؍6[!:2cu3e-'JX`qjeK\3r 攈m"&Ao)N(rD$$Q4$ڠ5jJS3."7O?*rQӏ~T4ӏ 5ar9\rr/oOkюzw195pMj#5~ӟ&n\f _+eG0oRRHBvi^JPevE@B0֣%XE3IbI) pf(kbgN2 VnEe@JϜdmBceDт')ǹkeDr{ 6 IEa_RFJCUo0t5C zAsT?d^BJF:5*D4J(]hӼдiplgq?fĢN'(Pu^8^dm8mr ǻ;<(l=!;|1?G+v_T=GޏNS'gOCAh{%#o =ha5 ˫{'Oa9_l qtvIko7_NZQivZ4S=V[n8HB(|7uѿ٫*n}9k v9SnW8-MT}߽Q%͟s[70Lu}N߄TMM)Txb:xpЮyaop$>aPZWGWfNΆnu^M6ܘӽr{R7wzCnS*vs7ug]9gh^5jJ-?|&b(n<5ksm#׬]\Jv̆V/AClĽJ5)5 97[r|vO}|VoN7MFMe][gV}$׋F'7_;Wk]maj3rz罺7Gu_7usHlm=ĝ!%W}ñ mhp7^3s1?*\Csa, OgU mzo-ug8p+bSP9VI'L KX <nFD!FI7ҳje M0S}T[^Q)s"*S2D!)G0J^;8̀]^??'T3Gb2-ٿ@Vk(WRжр2*GFް`2JoŒ8 ,p!6z_@{rBևTHC+w@9hMĚd' J0d@NGr<=V&Q-,Z ;ʈ6[I j{>RpԥhrQ\Hݢ6:'`UγJ$~}fPK vrQsSJ h,.\Q4T801M ˥5Q!@á4IPiU k1 T?nHu9i^'Gi2s(u]Q£6DT!tNSV8K%%Ipɑ[qLv9]vQz!T?mI+wٙ zIz(}-%ڲORE$(i)T ؖgfwgvaFs>qfHIiAxBfQ|Imugm#>zo. JcZ[Zkx(bZm~Qh6RhӇ wLEt,] T׌͢m͘( H n}ZZ޿!er,cFFi-COuC##$Amh+և#Cӡ TH)5F2ƆDnLeH/kFݰ}6w6&+&[{XC0O6˓-|Zhf\MBȬ KhH-k5/r\n~q{4q(}T{P)FI<8\f0!'ԪMoQ'G^hgz̨qJ̒0ڰ;e|sUrNFÇׯ^Nwū?2T=ْpMZޔ@ )jwVS rD8hre526:P)ꚑNK4tg3uchMo#8ҢFxnMj5%3f3FKSv> 䵲Bs4E^Py=BP :NtG]^0%ܡS2ñD׹]]hkG:o?:.HOFR^CO*NKVn$B0q dF윀}P'Wp4sGSw7riQ"JtFH'@8-4CCg(L2yLETZ/PTPѥV)s6OЕ LSBa3\#|~ORw=Hm<Uر>/4bS"Č,G䬄f9fDC`#"]F˄L)Z6˅N uU+4XjYD#λVD#fME4l*FơJ@YUz\U3l4WNnzi:񀙐/ bs Fۅ v,0Wm5Yqm78 s»f{FæQ8ۤ~_Z~CaXΞ\4g']`n=)<ntlO&4/y0/(3"2ӔN>ely5z]e*|nÜym[4릳7:lwl1߅ s0񳏓\}?[Ь+&nvn=sS g ߑ)Q"luSi&unꃭn_4vuBD+6yVbˋ+g%_{רDHclȡzPMU)8{LǏ7jQ>uV̓E﬐8[r0)ivQGP6'A ~4Qt:08ry/~F%t5KU41jvm˿(]2/ep)j⭏uu^r/gzTVwE&ߔMySw_5l?b_ ,d3"5M!k 'ۧfsHD`92A;dCK>DAJUIj+)+JU2JS}O7̆galgIQ0Gi ZBȞ,M[q/u:VU *ibigX˺okA7{jCSx<V-"v6*E"d,wMCnȔu\H̝RFj\H?"buV}s2%#l䡖OCD0q}bJvU9gySL]F_;wiT s]qsneԛVV:+$#5K] MT27Y ϽSR%IVV3׸f\ T}. %E϶AWj.VY^Z+ 4]B RgTd4KLtBy(5r5IYԅA*Ai;(@LhCH`iW+TKnx`zG )nXJpzR23M,Jr@nݠ6k-[XECslo YA'DTHkfVki2l} RסjAؙ%TLC^֍I㶅>p{&tX ӍA] aGY$5榍{2:uF| ru1V㧃Pp07]Igx:,ѭԄVU -22T\i m)th&'y4Rq~NⶐVʕKyϩȱy6MMopCn~HaYWW 0pOլ+w-wJu- TdWy'4&be::䭖 ,Mq|noqPN]܌)\^g|?ZO z 0b.(Z&ҸNcF=AlC aaqe&qχ<ƒ31WڊW#Z sH}2laB]l(T`Ȥ,;^\c־ A$Qq *DJ`_޲a c=a|nb@V=O0jmo܋XE4ג1VwjΙBoe*p~ JRC8}?A80.?w#ݿ7erF^7(~Jc2E;*~Vs кq‹ZQTWz׃l2 G~r?w{MLk? h0|o˫{zaw?z=X'.BK?h:uѫ,3q)K͘7aqyM#E5#왢7Fϙ 1a~+ușI>@ Ri9rʳ9eEmhr~$ 8m%C.aswQMmZ;}i&ʥq):[E/v}D85SKj p!7l+1vnD2^ڲ:.v^.VJ߽iGwoZsdsVz{-0zN6\r]C&f:.VQ3ڿl` ӒF<&9rά_9фm`#?Q޽Ƌ (4=,e3iaryzJFecŚڮ}mx֋ާ&8~meoxBtd_$Wmšބ:nZlC?~*w0X.K`5J[Nȉ#ЌNG]t6 9ͯ$q6sȆq7ajd9{;,t))|1~SϊG*q_*uToLkcIZIAnqh˔x,(2,[;UW*b۬;wT^>4(|NuWǻE.dwrFcL]c֮'5/=`UXOfn*"1d#N~(.^ ~m@ QՆg_'n|pƾ1[pY+%f׊Z%@iG*Q4p\K_jѽ%g]Z-yA4W2L܉D5{ :×` ڗ`Jt\vԝ{/?K._xe?h%4G80 %C۱.?{ s'Vq%ՆѼT:ӼSHLcN3HPE)[hip)S xb9Ve_ta 鮯q"2LO :ݙ\RX/gQĩJ,OA s4|((&*I"fށݰ:x>X>`gTQ:~XvB0`^t˯ |(lx+ 4TQN ?{*gލ_8c;њ'7Kox ^A\OFrp9i'CzYy&T޸ 7nB卛rd04VfNKDSc*LKD%=\[#1 XnsPi ~ Cpf^GER_z=4Lr/Ff3[_zv0Z?|恵jDLmZ1 o62ݢ=29Z5zPzDcx,եjۇmfe70cc2eM 7![জ- p"Kris,c$Ersf J"Rs%h!cR]<7qZ>|o34a70c!t;3^m݄punG[e3v")tGY"Ԥ)E!,tJ`, (͸ZqۇmE %@gMN#HYʏ[f_WI.l'.5lX"vx:/{6|qQͻ7QW_3iv yf>X<j!32ZfT- +sp'\D^ -E4oXixm5_=6lݨ|f?@9%N\ۮmfCo7^Lizަ2c&w,1+_c^Ei/]:eWga>KZ 4_r4KEsT%zCNL7 BR "|rJ9GV4bz餅3 hMMRon?_˄e熻$k-u 蹿B=Ei΋a8T"M>7?Fl>QL,ɎJP `w(?Wɉ(},x#_Shǧ0rq;.pu"OkCt H*u)yVjzS:fDb,/.B魓>QE"-$)"WkrKviӥ(דZpv8ywm:\(B~7&M&wHEW㟍j*-2kV*` WtW ǡ @= GAV 紥_\޿sD5j{uQsF`vg*B5TZ tDݙFjUPCk}:mێd_tᴝ[:GSigZ7Ȥڕn! 
I޿-_| G?ۇ)# "A>{׫rl݇l@嫿{x]dF5(;2Q pW1(i.%",3^$тK<#'KH&v zh2N2+\,{?V;(<.$uwk֭2w}Gs&4!5x3z#շt@yh&< 3CHZJ"0!&-Z|Ֆh2ieƧYFVBb쟖,dϼ.yEr Tn 5YimavG0[ ϩWBA 0`d$3L_ \QTs>MLCe1hib2%gZkʺTPl; *} 'EoƔ!o3|6SkFadӋl@j (z4 F_[%gt,xZ;=EO(.LLOңih%ev6ԽP'mF :mI͠(6z>6|9V\DnsMOWa&[kťN`,II"Eۍĩ^/|\&dr 1IY30G}i(k'gw+_'_W{½_|}y vT2j_| QTӡ<9F():SDڧF.kh_j9|K |Ia/)%YB:0hR$Nzj%$<#ф0"GSLӨs#K-cW`{:d2΢P\q&[YrL/S>:psv96?[pE$ a%>ZkPPT 2rm4lA]xVw]ūZ}Հ?᝽8B1C+7{u˨_ 7Zv ( 0xQݠge|`@pHe ]P,I{5Rj-zl0H̰"dDsw`.ļJ>F[U+6Br zˍ!z[Q.=+O:` [|,5nlcnR%ݘ E[nnlE[kUF 2J1U:O24hZO7-./vW MG)F(PMHMk*\[+-b ln V֑YOV't#T `Čӝ\M6L.b{S~ L Ђ&J ޮ|ߦZG9S-[ ;M{=[I^y)(7=r,T/s喰VmѭtA -z_3$p]~#E_ CdlAmTRHRdBy+"w9ZDړdP6͖on%ii;4[gp9Rv_iS;}Lw5f? _~hʕXd|h-hBwRzrz-==5 (53FO'pQ1$8p ̶jJjJtF lAFC^9'=8#An= /&V,@@qc{3޳ns 1$#sZJ8P\d xT2x ʻbUq ~|) L_q^B=d!kq[BvA.\6:{i}l~ﴂ] QIFlxn "iOJH{#ꨩb*j@҉H1N6`$z=ՖQT;CPdȖ /!1Rri܉B9ЉAİ{Ù296WX9#&H[4|z$/'= NLqhIh7q8\[8gt7r3.wՆ1ֿm*r_mW?r;}upzg"@eGS3i BqNYɭZ$͙%j}u퇪|u:w}D*ZfȖ$2ќ3lB1Ñwִ6paV7F",\Z($E+eIDx/ ;̶r5*KNVy" -OjsIm &U 11Q#:vܰ*K*@ A^ (jȅ"Xin&( UT"C6jh:;)d$ 7PGCm] w G,Z`"‘q9&JPZf[m-:LL$)4\N8lm#(:V]i i )r:xA! F4-T>_)KJpza̋rc9^i\ 3"յ-pyGZ(pot.`K " ^UJ%VM0JUʤuFj͎ s$2#zٞ'Hө]+?x\[8* XNCPr1 4%i\ 3C` b94`CщNQ1Bat1zJW&[l~4m* M =qs8ܰlN,&4A,,ܢ'|dU(F:N+_Kzeۜ[Iov3Gދ/WlN-s\I0yZ*}w~/\lX/qN"!Xid ,Sn$-q?Uۋk[*oa2fdJQydbHgS(*Dmu~0}MWn R g#eO}(!y.WK}"#>Z ~f}5po//4!ّAW2wq1Ge%xpu/o-~":ŻG>uzPQG…T$Zg@C٠"٫:#-kc T(%+umPWuuV\]-5Ab6Ƹ49m2ZQMB^zlSqT n%z`kK=Xz)}7w-F\C3 vbq6Dkv'd;Frz }cA4q8% $2J^luBݷCfkzQ:Ze7O9PI|Mh82drCMnhVct0wK*m|cE9AK[%_}I[ùL18"tٷ*|+O!"R\(Nmc7,Gp\m&DӨ#8W粥4>iE)U& zuLV!f~mfF1`GXW34X"8ϪVmk\ N@WrAv7WG tj[cG "BI7ŶR$X%m$V%Y3!R%tUOu <0l~z+R"IElo$h$8M_|PÑY Zz SZB4!< @DcS*c.͈[/ͼZ_"v^ʫEbr)N[:[ 5k}FҢ^⪇ ׎hu-mnŁDŽ+%p]XM!+=4et)%6qi!k8xT' F'S|%aԬ.aGow-q0t TX{sXCe #JQCRxߝYbnM+ >_f%@旀@V!h%d{26č㹪M _}x n7O6\fzNkgXRp]&gj[sae#hR`0F'f%-mYJP9Usf*L1NwM9ihS1-q oy#gnFp0hbIF:+'u+N)DrRTq0:;Ou9LR:\c-S퀻81+^YV8mMP.qooBa Jfo 32ٕٴ7R(7EQv("M (gfI\q&mΛbT%F68>nV"&3ه\w?o},1wIFx /$`S5xl!Bxwd Mg_ۛѯri^/i9xT1n}F̌x,8Ait^R"&r ΅iq7e^B{ 2#1ڷ.5 S%=Z:\]s:tdVJ|&HgL Bv{*tW2 JW:d.Bepp]fAwRI|-UNoo օZe-Uj$D?\qw 5U 2:yU!s)[5%*G.ٕԋ'wQH=zށQFWOzN =DŽI#@PD%ҤKŶtu)}臇LMU j$3r<áE4úGW58}{c 0Y:>ٟVs;Y?ʇoHU=5ncF Rx b]'x۰\`yӹjOfcb_BC:/9wYqW5+#{oO:8Ťݣ#tKz_jE\` W,͂JQ+u.PXj& j(8!ްnK}ޣu5ׇ_7,8mgVΜVUINK.(<~_Xݻj֟ O+>>//N " T:9ӻ)ڊ@3g]g?<߿m("*m.glPrڅկ<:>~K漞68C١NTTX>;bupSԉ [Pxwjc0 ߉ ߮YS-G1^l1I=+PRE4߿+FC0rf& RJ!#8v<ʣrȕb'~[Iv@YN/҇@_pny˝նѭI5Z:c6k]˗u3nsz؉5@)Ra6 >#\eo_ r3a^8:/weu9E*Tjfey#:;ov/ǬP@ [WOqv1TxF%P8(v1k"Gl4/gDDL~- g!N37Z|#A,ċކYK!~_k*n%qžR2]#.C~Ztz "p^n{oգ|EɌQ5#3ͯ/Mf&gD%>ГIcɍ!:IkCd3_ޔcá8aQ8{)Gb7G@pmq11՘eù Ƴ%! En3VԫuTP84 ۤ%)l>.Yg+÷*дչ j#UwOV%I$1>f&DB,E{k%uN Q2jGW5G>))1aNŽKD՜pGU_Rz6y/9Ϲþ6&sfF2œtzE X^8$;Cl7^Rw,8#~[0tKpnY=,!Pn"0\}.at2A6ީF442KKmdKL`xS6"D @O<1Ii\#} u_/UU@[ 4DVUdꃼ9a$OWyH B++«A_N{~̭t Z唭Z,+iĺ-5j9hOMbr!0v'þB~8I~O0%֬R]5ĸ!$F ׭O||:'I׌Zц '#Ǘ1>FXҬy|NsSv?h\ALqT]V-w<;R rZoMm9a`v={R")k }j,򓥤=^T'G"knDKvXKHV AH_4s u!p2c@NDu{n{8%:<\KU KTДȗ){)z/t^P+Bi3͔imRCw0@߽mۛ_kW<4P惠5p#]cbiķFҰ_aFHE߬tC@e"fDJ#(ȹp{KwDƒip6ia&GaA ?$tN! 
4ZBX.4qo:J82՝l%w|حC3U@ tk-#P,CCcVFgƷo}k`RHG#A !Q/)8ڐb+ɺ̀phGcI ΎZ 7uh# 6eg( ʅ< T7*heZ amxL*oM9Caڕ]SHڹ:%U袟.4gDuZZJjO,%yĨ1%QVx:Yu5v F'!aM mXblNZ k!`  箵AXJV!֖I`C1v&CE"D hqʧ K Q`md 7w c'_|_nއfi7km(V(t.b҇@j6bZZ7?TA`T!U#YaBi.ۍnTKXl)"핏Ą`/"SnT5Է5^$2-CSN]S dzgUAqHQfN GΎM=5N;hFЄJm2u<`8qF~i(ůu3Ke%GgҲLiٌ;=+K(c7^ B:EtHB}Qݛ+iK:>JلRDQpdB9怶eR]5b-|ūtZn(ZjcvhN7RB+C>2N m`5('dOSpb0٦wr56m( L'yYI5C}#6ݯaetnG ;R.?x.cS_NذNUM} Š' bG:E?,x}So=~~woߗMNvMN.tZ>PR=0Kyw=nH\|' C`{Ad\0 l[O߯(ifZR~4V$OKM>OXU,V5PL6HkSd:g}$\9qVC2ՑʤĮQDq0$Ro% z7ڛ4k /Ǟb6>|X ~c(y ?Av?s_~Τ1}s2RK΃kΧ%=\)DI`-†[nIO]oGgb G R s]rYtV)FPQ r5]==~X=#0JDI\ggn߉7hqK*v܇JФkXs קy)|(NyxY4MhaZS O@t7>RFNwJ~K^xQyspPPr*/' )}r }W)xhycxtj|Fc.vB%0]ZEa?΋j}p9=st1Лe/nafʛ-2kc݆[jFbBn,ahm#mi6Җ#0l:*6 Xf>flUOw/yΕ^vxzwp]PΫbkާ8tزI{#bVCn":u黚wR{BA,hl0.sĄR4j])k 6!w]hSdK)& -fSu~ŰAt7.Z)*ǞK:IVq$qt?ZxlJf ~{؇QI{p;ĕ)Zj;VX[i}4rP@-B6"Ne< QImkE'ISQZE$fjq(`j i4v2ݧč"hӳgtYAmYJ~fIJj?V 2@PA֋4NghP}"Os(+ҝkᆓ#W5eU\i@H@% Ua{G,`5=!ao/b8ҩX7b?ߎ:VJzm4J#0i0D%%HF{b2#ة;ulk ٥]5nv?]_ Fe|ix~=GtPF\&ßë^E*Wep h-f*g<\h]-evJU*HbdaNI2Ж֌ 3ÒWFf3Tc Q}X|āC%P X,9Z| ~,>&y,L1I И}օS7)+~Xy|%s 2\ŸVAzڸ^\pZ\RN4+S0HζVn"r#>9ŒwAYDk L` A #8_#uTZi!JKVHRlk I9żmoR&87rkm,qV2ܢ0;\=BSI! ><8 7-a6L%T2e7G4Pp2(w¡t`A3QB*mD@5Gǜr5RQ}u 5h9$1%^1Gc:D͍?ec+ iukȍ4/V_cŗ*m/_.8H)jMV%4qB ZhY:UjZf3r,s~1P UZHT䎑{!>Nh?KEРn"m }# fh,mV})ژЏ8\ڄᴔާoT8D#uzg)ZAQU$=|,#ZrF"}JR-Q'GbI]/GƉmU%les9DZKQrB#fzJp/eGFCu<%vQlI#N5mV_x^^O_URėwoO^$N^,$nm zkq&ԟ Jmc(D'mRkϫQ2J3)H{(Ƣ_v"HIMAjB\6%ӃKU[E2F$j(@zP8в\C!p uX%B..( @{pl+ >i_+L jN2!I.#pY4#ť^:M$%hWk\xyanXd~ lO4}e8h9т+\`y `ڨ 6@A07g.ɨ$xތY@cv`@6vB gptje5 չT8&*,ϋ`F(ӉNOJS\PA}# @( K(.e:TQM7:\6|k@%I&hQir%6ur R*گӛt=y@+ 7/2]_\JA' 9.4@xK_7ӺY"V9W;Kgsb 崇cIH%=lλiH ^cgbrRɳXucf(ݯ] yHBc`i͚_ ު/L'l/S(nf:ʠzNP.Q;V~>Q')^9_ DQu@(>VRwD1QfV,Y@bi.<8 fU?rꜲHqkh߉Rf1jZ}}}7gJ - Qh};~1#enev)Dps%],+4oGYLv1ōMl^چ_݄ժV8\o~C@k~>|ޠ; %K'aհttt] {l'c%ڮ#MR(jlGHS֫v]r#ڥ\լ{+J6&Nf*+W8ٟe,=ɮ']jzn'jNfҥgu*@:1ݐ5iyL3 VgCݜ =Ƿd\~}NǤ}&q0bk(_2ތn]L8x#qX%ȯ?wEh04a ٤3>g'SUj]zN.2UHQ\eٻ޶qlW|;(K~v;|(Jܦvj;IgGv$HYSlV<<<:I/ʡr?4ҋ1zOW4j1Vн~d#cr`.^{*$1&1 I (9AQJp1l$ެ@#2{ߛ-p( wh$܇p0FQw^ ގI ˫_)f?Ew# `›Hc{m=&vL ވC/:'ڽ߶Ĥg/\!BʚK|tkz{4~uŸoڬf0ejQ/ t3K7/3 z6&Y2 @.S@H)p*G%ZCr9,Sq 6}%lfS>dql]\ |Dŝ],Go嘣ŵ) 0^\Y4^j^.BBD~^:%Y1d9e,$E)9#ȸ:Ȃ2%fFH TkD]U᳑6^4[ &\ɛIZ^1M+t/ۚOȹKkiq3e"L0Ay ҘNvL2͙QF5zLW{zvkKgo\52nPGfm27mdjm[ bo!lj&.dDȄݭ;bR*Y_a5b3{~3]^淒s`'vzcKt%֭6ޚ<3Ӄs$HW~< Z/Ngcw7;q>Xfۣ̌T~eOΜvq!%)JKJsID.%27ym\NDU2s{!{upan1hrW1ejt5Sڌ0ڡ"OfDJC^eمL/޼i4O{ H=Gp1!,mIGk LfY* OJS[0 ш'Xiw\ M^n }q6`F$ogW:L?8^՚\ާ̋1JY @%;^ /lLv :K$_L@d?P9!,5E#2T?M DgRXUOcu/X˴ nZy5ـ8^ť%H$ .Zml&K`?`0K xq\bE Ot6s:I~&m%m@v״UWdt״>u {E‹r½4%MͼԸI"|kl 3wf"D> H4Z!hϱʟnMDؑZ0>I$ZJʪjksg59ۇ&T06ǘ2zawTp=|F.3pIբo`Fx ,l#!kJOp DK .](?w\-uPD(b*O0~,]gR1^p:=UL;=OPJ: $%XR(e1|y1\ ,A knǻJ}aGڣ@0փOĈ@dz>䥘*衚xj)JD]i{Jyjd.Ho^)fvW|]6AȈl ͊XGPp=c8B-u_>EB(x`FCܑ~קCEHܥn') G}J`Y !E$ɷ{C'M5R.yR*"sdG9CY % puxv馁06f h~$()z.r F)[/̤B`^8:B dc%D!_P_Uipz{ m=?;$56@"̜vDY1-i!h'N)aq=kt(쟭V:E ܴ^(+bw8z}ÿrqcbyD0t#NPjvvbn,*3y<R>cS,dz. zYvޘ>4ۀwpu^ b?+BhNاM{ͣ|ܪŵ/ . 1?2,=% n !"HW0 LI̮Ё+|8A&5"qpxaq q1V& O^8Ʊhj]0H 3x9ws@1K6NGInk),%IھL Vbb9Kg#<*XqaQxVxQ~k@v\ߠgiBL 4Vʩ"KSNRH"yq!U*YMH0S7=_tY(O et1Pb{;Yf'1GkSaƃ>HjΌ#a[5f_g@RY K6}6} T%cMwO@^2eEP)wA2ȇKLIJ*4 JZ*$zo(`xCDp{-F </QTXVt$t\Bz|FOJJ<^xWi)Wҕ6%B\JcF^8(Y'h5{7};q#2-Y{g?w5[W8<+U?~L|jDXKm]q&, `$ yO*zusxk]̭L6e{TCly\iJ( eB4L)oX.(В>/WgV/Yˡp2\? $`,#iRP sȜrc)yr9fr,Plg/c)ǐ95j$SW|a p Dy5/"eH"Вgzʑ_=6d A ЃIeލF՛3f!,qt7uEX*P\iѨU4㺇BL-ᤂ2D7R,ZsmuDtDeen;pr# 太FUP &rSb ]e4 PW6xaNˠPpE*mHSm~34O>CTz=:DЇ2}/cr!*C< ^c' mxQU}QCTU_~Tsmv7YƹϷ5y7 }Qάu]RV!Oי{oN|d\)V:aQY6 EYnXB|]jIiǁi}~HdB ktN]6 z6bM+k)百(5]!1VhS.elr\wWg`Q/3Zku( $ &˸2ceJ/?\p):ꂔN&]L3aj$XM}L޽ۇFd̵ .L! 
k?O.KJAP*Bq3TƷ/˜/u~17gshWq8#8H嵿^ʢd,~6jMڮF!V+)I[5nah50ݳb8ΪTET2eqZG-qLS2fե`9jXa=^rǦev8jSd>l>ׇ_;#SH娧2d.St\yz,OyҙMKlٲ1r%q$1FQoS$l+{}LÞ0 ~j9/ql;ފlzLraΫ}lh:M-z8ػwz6JҼXuAjy2[yT}kC XOB?٢A;nVa!M+0&ꋮՑJ" n',TTj\h?g!#Fqji8"iWD'*P">F =EqRzp㢜fO]G0+~Zҽ9IcgUI} KMT_XL,$"s2p#}>$)]+|𓍷q'. )M9Yo<;jہҞNv>|)/$~K-Gj0mlGJ͆w+{_)$)kccԁOq >eJBX{0Ḧ K槜+_T۔vTb1yUdQz<(\'8ۓJZI%[ &ړ ם?~r.:o]=++3*A'{]#B;O\X󖛜z `1aQJQ^[кz ͧqk*J~TL  5 )$ pP^u;Ϡ IS (p%k|cʢ^Y^YWʢ^VEPcp>I?5yꭲE)Zu$yndQ?hG:14&&?N4"לyϵsV#J@Zp :H&U`SJbeBr^"z$ R%ꠕ] S A0fR+ԣMHa<5%04w\vL} ߊjpx`ZpgP"Cbx$HMRpũBJj"QƼzATP`DF_ X?w5ph&Yl&YI|r۪WA.xѵƭO&Uh$ʄr|mXϕQHK@q6TaQg8 g(}^τ@\R䨜`s~^P.Y3ɛRPo*N$vVH"Q߽hgL?9%w˘8ο_nmϻhIgÇE[b>Ճn*^Us sr0Ϟ>?exeד7q2B\~"53R|+o|Ng_N{n}HNy{p'J$*5jv<ZBf'^< O"\~P{llOK\9aa__^SGNeqHoj t{1@Qƻ?Q Pw䯡JGMZJG}Ѱ|JW#㙯PSŀ!>TsU68>,bn}zwh lO;t~#hi8G7hK9l)̂[o |Qj8OBՎTu<=}&HЋ)7?r 2"icՇ@~э9$}Lz; 9Ӣ1} ъ7.=W|W0Ups!x{z}|Ckp/ص~X"Xsw}>xJ&2GwW5kP9i^YR a}#>ok/ƧDZ'BU]+εL ]ooǽYR%H{C5!KOvdK".nh?hkzn3;p$y{a޴#I.[^:- i(b{\,LH܎Wсvq7N-/۹K,2VN(\bAf x R8n:|svqoo/FCw|E]?w~;b|z8Fi!2ï]87R5 %_LP4fmdssfDg?" Up3Rr > UIl;"{) \4E~:\ɽ!.Nb9ſI^%-$=x)sb5fkΕzU|/zIoAL l7{]*qJ(f]v}W飒<9۫EqAɣhp2_=J*pjK=[~vm~/ϥXv>TB'S-BoV [g-P)WmN]l垞םۛ^ҳΞhA/1gD7{:d »:g? ֢.vd0s%Ә[4c罳|MOV\ʪLy֮' l~@$lw,PĴȌU&1$[2hsX5ql.z||Q|Y44MLÏ+]G.U{Lد~ַ>% 5t߯]\W[_}8ٚlWٓՈ,l^$\,͜.iy05i|LyX W:Lzux4K[kw'pC=L [{}D^i>%i lܧkTړ+ fg]>2Zh&oݙ_12BX?jG)G;K񆗐w T_ }vޕ5mkCmkJЌ\M2Mb4I2.DR$%QE,<p,/o ̫5zjLģ6ܒ筞t(#s(>C&frsibT3BPGVpP҄~g%70Ʈs>3o ` chkb:%) kإy_} $B]dfDTuu#vΞH10|Ԧq᧲՛0d֮=7a=3_aaY2l-F)jd, P8i\4͹|LݩcG@,z^7Vl}oڹB5:zEzMuNlQ[Rp<)]υiɊ輐38:P#P?@i0跚M ԰DBn&Af.4t+5tB4 N^f@Ş _JB<=iriGs$'8&.RtS=f ꛘ |`\0w O:&5u"M2Mx{ !Λhix0TCwOeD}W]"Ւ=i' DH-:v [`xO;H?:>AB?ˆ`;+U+Ŝ;"u=y9&&bIEAapPlBO;UdŅbD%L=y`ܪ'k}jt8rx]ǭO5q\bn'^ 2Z=ZW9. IUE:x d0a}\_,ߠ# 40yx䟆P+ӱc0aӪ  hSB9*0Fd0n.t B1@Բ6?r p㍃{ĻplP7 RNuGt&2 EJ6Jp'zϺMB.Qh*Ayi &f):mXʇN'] $1vA2jd=4f٩ndc`**|߰F xVr$x}|:R4`P{,>VgXsKlxa0E@ Gm)4\>r!CA BiJ=3_!V_bQ@3AF1[l'|ũa|t#]hЉl' VxCm CAt׵-_8:s/}ŚDy&&1AsṞ&Meri`,81\ ;2mײm1b584/C1VheGկĘNa5ۈ[CC` dPl2\ 2v۰a3,㓬7 {:#C;LDK5:F6gƜضHbK6m#˴LbkU}R#iCJ|{ӳe?.D4:nb 5 O' au\z}p1sJ%[|.?6_zt1,]3t[l/R x877a &&QxEJbb ZՊIKc jm nDS\lu1Mh3βZQrq*Ձs: &`n+Ͽ1zh0*IP cҀ?pI-[Jc+ Oԇ>dN4P+ SXN`ŒP4 |ar>>>|,23[~-7p>` K'X#Ա .a-k1paM":i{ `8PU[|Ix}ⅦV2sQؘ; [PpٞDc;d ApĐ&0蛀נ+h'vj'vE]݅=OI(v?1pSDMN%!U[ǀ6JS! .smOA8k eyfe'1gڔc?^yIo:YW'+%CpI(  %H5Ƕ |;2gCZ7m%\Qd5ĥ_6$آKy.NߝU?}1ynk{?¸ݏz?B/9E`}hwG-abfV22;?-ni>2_~i+v~IKP'O m_;ӑJ$M7\;/9m%wJm"2nSRR>bU7eۍ&aArkK ĥ2QB:X,9L" `2bOI t ꇄFQv1<5n 3_2d:䗊``qxUʻU aMvک1)Y kzT@њJ^CJm!swy&?D1=?p2sOM{'4'0h3T_~^ :ًBrqZ+ ΘrkRܲŭ@* |B)KFL-Ed9+FKGane#E؎8]jc\E O/g-f cXom[!.ExrbSU6Z_W?, V86ӆ _ EY5ËWΒo<]1=8OƄ{bD\lȥK8H[^kU,w$ַA*+t٧՜\h~3vSDŕ/?T"%`>}]񝟋/&F-j{ &dpA&R۾p7t΍fX2sq)͜:F"v5cþ*a&& 9S|:Rj[ xYA/f-w~R;ݰHDŞd_oaocֵ;oNnrN9k/^,Id}ήԮlŇZKg9p 0=8t\:@gr¸=ZToV]KhJvpBJ2(OCekxϷDdkqv6O6}W)ɳXMH AVd'FN_4xRJ0(AtbI鵢 `\j %c@ti+J{L`JDzu O'RǎmI. 1EIQL#Zgl9f3'sEx9$Oaw"\I~y> ?䠂˅ V)V+9;ǷsR0d]xo4՝ ]SEs}Wsݭﱫ>u?J*1ڿ-wͬ FN {}uuJB޽8?h} Kٱ.մL@1#zx.M/ WxA#3ov#KIIՋ9w#GaGsp&N!j٩K(҉cqBMd;aٻƍ$Wz@]=Ldc=k̄nztpEf In,̬Ȭ=9 ~̀ζ,Ύj8C '5N =$R>+퓝գEC&Nfo;'!WձaU?/m8h_{xoԱu\a/H#"2 2#3Aq œ s$rQǫBsdT:G1*GI﹖8`)'NFLxdxG,2ՅtBC>EC~)0LO)i&AsYMχtY/& t0 +߅q#-C w3~0U 2(er0l@ wů>6õxQЀƺg@C8FDd.G& d5u( =h{9ضARRARq 1u8:_q)TM'ńQO =Hgw4qZg0LƼt(I{헌93j_2`JhKc(_e%]$|d*$ƨPJ0Ήlǟ?o܆BD=PQV4[1GgC*D8>s)τmc[v '_$*2,ܧI߮??: h%Moկ0!vߤumAڛha8+юH0ڌ7`IL <#dy%BZ戎N8X%h"n~Z$&40!ON<:0E"sJQS߹Mвڄ9-~W_\,-c7Was}}uvqlY,YA +[ٟϒAUb \q+ngB\FD)IZÍզezkHHtTYDD. 
cGᢷT/4 T'b&)Xt^SJ1ƎPeגsʂCD}_o&JR" Ykk⣚ZMG#\0p-XiCP}YMN#B'b*OKA e}P*񢎼ThHgZڹ枵`\j*:4\*:i*?E*qޝ֤=۩]7B:;)$?S;؟;;?iS=>#Y[S(b[K)6,4ڌaD QkƁR 1O Kd6=l]9c_g[|< qdxl~jJJЯ3mENW'q:N8H)CNX%1iYӦ:*#DGED4RFǼϥ!S> ,ؙq`Ui7ֿ B qI΃ y mgTO o_4%?z; /J>#\}9b3GKUh4*3QJo"`nRD;cKU۲Dbñ׷5:`EXfY=w}Cm&=ַ4c,zm)b+wSS5ojTwOD}wAo$w*ҜT< @p7f٧ԘeI,oZj' ˆ2MX r E{Gs Qp;tK7!{t;?3LYJOn6_F Etd-"S3t&##p6&H HFPJ,aDDT^K9x6<@ദ|L|$^#ǬJ`rGI09kk\J6PBaydH"8}N ZX!,]G9Y#[&&fSV 9hOdzCF

7N4elXر?պˡ !*& FL4Ej򢂉HpAGiDLcx+YڸȅA`8|'.-rS*DW*8nTJcX3ƭ=ncKS~puR}uڴuI`Ъ6rKri9sF>;W ƪ6mH Dz}1}*4HPE7ާGJU/EDưMUHv*a":t HtV@VI].tj C '9G"Ir9]lea9͂Ժ R Fv"[uY @,t#I4 J#WVQ)7uINp1rD ԟWܤ7xM:^qS>^QNypDbV1ꍤ+d%p3X dGE/hVFMI)څ ԻV)zARSsFՉn^ͮ@ll{yB{_5y{ˀ=$}]K/> v)yF˄]̻| GzW ks"eV.mnb\12@^d A{hԙI `H)kQE\(-F jCd"2b lTѐCY#2]y1EDK,4F# |Q<015klr%{yZyl83slq0x:=ѢQ.@^d#4dގ|Xhvd[axHilaω( {M 4]7P?izA yJ0LeԺ:u AU NR!*9]jZuu@KφL h}ՊS)0mTRodKuDT\$$e~V%8n,b!ɰR +^ TQ64rĥ#fH ]:#+zۍm|bhdjx2kRY_L:AK#~ߐ Z+VhhG v)1z8/Qu14 S042m)i-ĀHͲ, Gլp}?&goj2OP-tql"P֚9Fu:8zby~t =WMIEC'hh 1}UYuX-6uPWR.^b.ppSm)7WS9^E5Y!ȝ&B|\*|_^w/e֖M\i+޹î zv^mkvtLDZZ[w^[X}&$1։EAu681woXFmU2BmKS;4Tqq~7rEY)?C p-GZeÌv {AS>+ҽNVƮL~mLۧDQcmKl`&d QJXƎؕ"h`(b-S簒coºGJΆԄkV_7׭#pԪ)|*6Aн^3E}S>ArdY0I':yYӽ<{"`7"ৗyX|㹃ӭ }Nשׂ]5|}UеBԠrG@#t{۫s+n|Mi4J8Z6n}TUѩbo߮qkM!̺ݪb:UQƻ0nNi{ޭz&АWt_KC8k"G2 m;NciLֆ&d)۷KߙhPalMDR:WA\F+wnvW>y3?biȿBdz`2bJTN=OEE5& E.(gXP1}ߎHa)/Rո9dm-ak)QpNy `~KB7p'U6c' )F`_uYcòtHDHCtR- Cnj@8%y)c8zx"j`}o89^KSr~Ln.o/揾sI/ϮR7}O6ԋE<!Oj%P;q( <8yd󑅀~*{q7\8>" Eb$y\L)&o^,~Px bz>/15~J^iҫ$ X Dk4RJҀस︮D9"a.mMkE>oMQ eF!E ԒƏ=qCʓŃ`;iJB7|hц _m1g#N4s%QjʑbV"ÔDc ߫>R_,MXR̠au2s!i.QCP+GdD#"'>\ qKm\΅!j-N׻Bu0TA g+{[B<:yộ?֮FsԨZoP)q@L$IIpY T\#{wR[3-RSㆯӪũ*nYPd>X ycY܊b %]  = ZLeu, feQ m4:ݞdk~qLvq10fԴ6%7]_FMVӴ˫bA4 ׍+]Ėcս>o_ $ҒtxY@͵pΧ>-ݶØ ͵_](IڻVgi ʭTD7]V2ԚO%\ȎIT'Kyr<ԥveR:;Ϯnbձ %[!Ry)\u[hjɶ8oExlqm%mmۜ&52Tf?@ QTaxZP>_:odhBBpQxx;wk@mE%}Ľk1\K/WU~J*Q(DTgH#=!VFG-#sW+8GV{B$󠟆S#r$亊1*F)i+/'.MX,x [~H  m5,yTzoq]+.}<uboILWZx"/x P}I;>6֏F z/tiִ iVɖ:(4 dA!]i_Ilt8fmmb[FX)m֖Ta`/u.#քWxlEHŖ2V S1DvJqQkƽn{9,Bg5}ᾒF< R4(NP3 ŸųQcYh*IT}. "ƑsVq,ʔC(cB\oE\p/516[ceU36Vm"(ՆIt,?J 4%mTs7A/b>Nn8rDR6{>N6$j$&֎# !vv?ND˖g7IESٻUH=/ 9>n[LƙGg,H)|`rLB,-(򄍬?s񔌤P'zǾCNJp#'n#yCJ.IlL? :$=I"j|}wAxcП&J!n7u 1m+R}t^I*Ґif@1Lx$KGg 7e =!FNecsNZ ajo|[Ùpkn>-%1|ȋ7@~' 8k׏tN9>L(w7}mQ|" l+]/džfY T/3{Px+n /NYPlgQ*rkmKR:e[O>:ӯO.mkQZ퍕y"\8.GX TIb5!RciNKVU۶$^S"sTGm1a^E 0l5;lXoAsVp:va.PW]F.%pKA ,܀;B"XZ+R 1NSȧLܡXrb[#@Cܿ396|w֛:0mi1r y dY4PddbYQX:G;G1؟_̬pᔺ?+2aȻO.Ooܿ5 ᬸ3F\i?Pxuloܚ?IpO\$9BF/1|m>&?& 7])_oJ}|0 -s;:oOOOzEʛp &A8co Cj)D@`N_,My3: yaRng*Z"?ݴàj&wnR 6/7ʏ8̞>Cmexb;;{@cEmb>@\f*gٛ}}++0 篮d\an I3;nŞ;)rE.BWb{/aM?d+WΔj8 1WO ?(LV3 k0sF__+ZnEX[m&9WgBKa6 9 ʹ#RI+ڗMt&eM"YtH= b:n"s9JjC$"9A;/KӟpSKDŽJIG%1'}.ow GZH c4ryQdlIF`q:W4GXG%m "WA+|.Tceu:.`YE#`VIDM Ѹ#_ n}o0M\U f+fvBU1|R̵<җ}ϳ1; [͵/]^rDlHC^ҩBVZٍlsb:uQev;*v٭vGcg&4䕫:eżLfҠ=GNWMۛۧt|ūorxzqd1()>>+8LwWӴelel@迴$tGrKɊrUFЗ(TL&c0ݹqdLav(O8hkPIC: 11{$X} I0[[vH$6^IE0m,mD*/İ콴y$ &*(Qإ",kN8E`|fcLY{ˑ3D"VT)I3qBûb J787Rsƿ~kNV;\mpP W]c_s*XyГ /<46`7L12p Rbf w!&K␠j,0H!.!Lb(,%+tRiz L#vєVXV.Jd6JSɎY )1-&(y(Geұte<֬Q*![ ^Ŏ(cF```k="PD e[k$m T78;RAK)XΖ0h-Q[|1qNt>@(e>dHkY@"#@ľ"a* Zc/ 0٨@W5ʩ✱Svj=NWy܁?XrXaƎ=>l형HY1mR9QĩߋQ}#T="Ht_q~H+9^@tEV\hpl\2+0 K@ftz<"XlHڳg}Aƣ'mCcS(H]Lq}?n*υ#9481i)V.0^G/7oa9#&+0[$i2#;hi1B3JN.Tiv0P8g´h5K.$MLRvk_,qh-(N]qpzj&mR3CPXHԬ;2Q myeEtU)Q9pȇp";49"cTє&P 8"g $żK A<4D5#^/5+deCPkec.OI$ I` 59:y q戸# >{ב1-G'JKA[Rբ_=+\v7L6z՘` %f> NyH% |5KCCp vЯ1Ktݍ.(l|x񽰫Bb$W>$@&̄- >gHw8KSBQ[*THo.S|tb{ML!)*Gh6(ԙ ٭#A߳nmb$w8݅]݅]1Dh{Dj*ay|i(-10ҧ0bLПO3ߠt>KofWڼ(#Щ7P!n0F'pW/?X +N\/"J%gaY^2*řsNj$7rIL(AT( , K bgJ \إ|$RwT3]ve(̪ ﵠkXmkGj.^X K_?98l1F$ RA{Jv(=Rfu1rRCN`~IITc0YDQj9"CTz>Es~9RQT9 τF'+{p]DZGZm]Cll)lYM Na[[S$DŽMDeJ"v LI4% 9тDH"&u+ċb/jn #*7(Fr GnEQS.!O4rۿv1c#ZVFfۗˉ瞷EKa!::[vW\r>^Qr^B#]U~6W8"+[/ 5{]YvM+p)iĂO?z\~Mj"W_3Yt\ f`z0kMZN5.infVQBh"P{ S¼(CV#C}Ơ -UEZɬ"ɠ iPPHDXCvN0"/XGug-3@v$bg@0JiF$u؀oSi6QcE6EخRѣ̂Bev6LGw~i3i~yb8ü*T6OwܐTOoL"F" ?d;/ޣ7>‘o"bJ_ӻ'Bq>N ':WL?8<, H`PǂǷxvJMs*c7]X,5G'Tݓ@]y((2)*|#]gbIK 7TiW9;gl,\ρ{?eY\nc 20nڙ7^?j(n&&T?}t2Ef0|NrsZeɣG'D#J[2W|a27:VeXc9^wO=>zhkr *&kuDK Kg:CafZi< Zqʤ$TKja8;Zb] Yeg.ʶitL֢{Ws3ZWBEiJHWCJe@،yeSRMmjEU Pڽ;s BMU.UH30!1-qoѫa-EC 
9G9G >G_(!ת5]nGA^$OFa9,q{ O7NvYcۜ[|cl=*]ko ~=|m+/_Qo |Wk @Dܫy8Ԏfyn-~1 ){{TJ,2*ωŅ v76XFS s5v<" >/ 9?f'ܱḱT(۱fkX5 *8BݨJrGs(|XCA]߆/n<Ņ( coA Y kg7w7Ҳ!/]?|,Ɖu|%xLĀxdcwM,|ookiy5N~V.V Ƹre%(([[ Ng5Ep=b{Gs}<2뙅4AQ .M7&2-t'×t|~nB6NyNv1G+C{tėЧwN:ᖱaITwl=JIɩ{ WɀWrVɊYϧ3pxvgN>V:6& EZh]ʭIݪ+ '~ npR6WAaMRi6a%${o56KԒlf=9#0"٪2oJ߈zk"0Ǿ7dD K$Q$EVB{{Za{:Z3AqċEa8ʿ%Yoɼ?]6zz>>ΖwAbH\bSӐ$'S-eTW rgukBYg5ޣ[+:ۛ+"j8U-*-(4aǽdvy>κÑVX PYE⛄G λKě XEܪ,˄ӆWQ©vVq;e?\tec2 ~"kS(xzef%ٻ֍,Wݼ]b&Y_\j+%CA9%2eI).,6εEWuT첲ܪS9:e(҈ʮ5wz[9J{]{翋o Mx #tw! o;mJSBt̡t*rD4i~:w)(n3<.)|x60|[dp"g"cZ 4U Nۨ@F8o7RpBMd!@ẃJQjn\rA]/fUViX'a6BFDuTrF`*"$lQjƄ+L<(>TI(;^;uLG UeDf"a5tN 8#B|<}ACnm?XbC3 ˘#"JnyDF=4[ sVjy4FWcs$-v~yMmsO}HHk@mDl] Cloۏ]{+7%w?3)[ԅ'J#zaԤ0z  ͧae'&d'8Iƺ-qQN?D 'sTBzA8] /k[RD.+#;GXG:υ_/abC2+Ղ"Qp%TJM ¤U1G C,PK O*QBk؉J VLqQWeBpi4E.tMRRO_?}~`:ד0}'?\]lX_blM  gBwtp4xO\"["e4,^o1a%) t㖛*X>p(_ %]˿8['\e9A;*~~ @ f'٥xi,?މ:5VBvD >z#F_s8q'(KCTRVS-e4y~ ¨[$>\`Ln1Ja6$z9%|ʝYz"0C4)p%@J%ix#/LWIK è&V:zSH)IJ|Ɠzޜ9\[urڃ1f8R\ qiN "54: r{z\`b0s=\7}<܎nQEtFٹpZ P}yn89e NH[h`ebtJ99Hw$c*=~=< fa;;s~)# Wឬi)3?.Lo`@6۹Ok5O<#k)g`&yS{su'Jv&-ۦ+DW'a/@vzd4Vx]њsNц=Tr;7*pWmKuCj޲8-y QN%Aؒ:ȥb.*cvɤ. o~1-ia^'q.!0P06C2CR^`Pd**U{ӣj1\@cB?)jNjɹFE'J+Dͫ'EDDi'HY"I+LCA'c-&V*R&X; .d Ƃ5\[[kE ؛\+(])zNK}B^tN;y2â 7f= p/r>Wy> CgmlT}%EXIg }=ǽ0sN@1}7 ұgi]+4w7ς)pBHGjpHvu?XQTruK:R,M>vM: #?+G(Tl%9ns9n&R5 @B]?_+| SI;WmDNSxKG+%coWݏN7vm*Bк#)oSWhB@hSM;Sf&B"֙ot qoS(xʣ},3/D7Zb/,S '/ QPe>)n>fh~u\tQP05}Ż(4@L4T0)$'l%FVi+6:d |ξp*A щsi~<>]Jǝꯋ/.qT!_\̩?U.tRSEThn{0V2M102f`"PЬ}V}S(2㴔J}Z%Vu ^J3DȘ66SDw nпlc<e}$VT+k 9@B&b>p4$,ٵq2bƬɀ( .y9s Ppn1CD dQ02uׅ,֔<>` AWVhT6BFDuTrૠA)h\B`)|z12NźT+`YoUş'?nA$AS`٨*c9e܀PT 1Rn\΄ ;!  &P!8/1Mȕ6¦AE՚[P_HI=K}7J"答tQDa8 '-L>s0\]L\xfgߤMFY>7oyi8|w G7?T}BB~?~:c,wlJ>1\\ߟ&lglD_O:Mt\T-Fb(M娜._=#̇h1E7QUŠЩ2,8gH5*hQV9!zkuZX%D=0[C~ݗ}/1h+_\{|t.jP rycQ]Y@.Wݍܽ.JneNӎ]"pcЬxWV^L6w٤BL:Q7W!0|K0,@Y,Cp .K 3,.vWߍF/%ƻw75[P`qo|O,u 3)gB E"֩Y%Ĕ}Y:m;ŇE\M5tAǏf]VaQEdMJN5Ƌ翦*VԪ*(Gebdp۵ӸMcpwҷuƖm$`e֖brtdm9G!O*ĺ-?5$|~>O2^?pcڛ1ߘl\11@QeZ>j8+j^}4|zI NK8eGܛ"|rŽ9m}.zܼh0&Mnz~lB g?ReԲCen2gQ{:K9fu(D RΉKT(gs+"Yx>?K{s59YAed[go nKW~Ӆkhkkw×IH]PMzsiS cZ5s¹TALѵ>e?y41ʕ&(mYȷrbWh az$-싹͛wgtk;vm.Cp2U! *D5u7ng_Q!ps2Ơ<6w.CKjP#X@b]@HcUgW,ˋ. 
]ѯ޸s`Kb1V e'[W繰Æ:`[x⢽Au['puɂwqxٷqepCZfd 4{w4{DT{RuugR9M""=cF28lTl*Em_.ܸo |s- W'{"C燷\S\SKgӍ&5y\BRyz!"友T5#,,p#$jfBԻ5+ű F9btaq"WJ1ƐĆf1GDԀ aO8U24!ņAr+<apt"FAq H#sU+dUd*UQtsO1m4QGe %T` $"eYP IJmT$z!f*̽"M_=yk`"PF(xWw(nLZJ;= 0ʻd^eN$?%6NkmA `j,RQFibHQ98 LDCn%s1{Rl```V5r fTJ,&5vŐ3k iҤ&S,RcFI$|ZDY#i#W9(ƥҜCQ#U>N>mUPQ qbf8`JVj),m5*13rb t[oڎL?7F0%eJr aX@v8W *rpժ5+IcK,$' %H *cH2% r)LiavZ ģ6 i1”F1LXaFȮu(Q}?MLg=<~ȥ1V]Y YƙKՓҞ=9dpZUL?_[coR>T媱FCoφeU%* oڛ\OcQSlayO> ͪ۽HN]/.ĆaEV<,w4H<|O9ۂ37=}䷨I +up7iLs $jdٻFcWI_Xsg%]&\Q!ulT)iD΅Cޅ%-[3_uUwUg0p5[1T<tM]^1]O_wUfS4*y !nBzM吅)Z8Qo1^0ğ`>¯|\ !0q4z5X$T!`gIWeH*x\uVJ z_;ڃEͰ77 WҲ>'G`l٭ _dN9xSwg{ii'DZ"I߬U%[}_Qg+d lV>i!֓5]Hg KWcܕN8[PS'o2JD)kG(kJh^~SΨ&Tl󞆹x@KMT%!6ħ{%h2ӗL U밞 STفvZ t?<@e.?W<uXdv9UZy÷kSւs5̶j;DkEVCY"4YnF$V?L3F;1dUůmT[Y ZY$1<+5KF SM%P*6[޹os@mNsOfA!<~49=s]dXtr>owff]3wfk'hK A``'Ͽ=|gL _x5LU%H ʵjƊڎˊŨkAZBKG^/:VEW}p暣Nq0 ]62~ܟL04oL5pi-T1,lTc?eHF !2 1b_L(T*aR7֙TkXR$nW(IMAl~>G3Sɛ<=3NWieI &υ6N>kyB'W<ɋ*wTY Db~ ,<^*5!lAVz,S`yC*sGvU;n*ӬAp dLw.Ӝ;?^N7݇/%cXa4X|lϪ/xa SH76Zgrb$^FuR&Id0~􍁳jX'jX#%Sh_.pN|ql@ݟ}oSm9{7B9DDӝ *R"8'SDFB]6!Coף_Ԡ5}C!&#ˇyB6X|X-?K9{|YKC٥u}[=dUP8W K~'n N7%>uCRmK8V=T*Bc 5,=R dHx`%RJ10[h`H2a0CԵ1ǒHEݙI\Nn^J|q(,.9muҴ#''4"&ֈh.TFF%%KR['q0)Co2Z(vTh5n%j}6s-p17 #M`KX0 J;ә ()ȉe} C@Xh7:ttD;mJ)a8"#Z&-q>X!U~n10l>ev7V*ϣ^ p&6^Zu58Tuv & L#7 Q+j<i ^NdjRF.͘v裊 8L`8cBb nި@D  Āaav nf//%O*N5 L ;hӤ5`5f&,*??j:ӹdXFQ0 ,6Tӫ;Om--/DK\* }D2@j\GRӤ$aZsscC@sqKĉ a@epHT8֞[7Fy#sX5%˸ tw%`M~KQL1!Mq9SE!A7 g О;$23usL4բ4K8($h)LԚԖ5US[taVD~wBի3ʉ t#\ &:_tV*:@:N,GU-:ᴠ8HKM pYѭЭ\t)V.ŢB#Y5㞣r|/;ef4C 5ᕤ3x\Wu1ҽp]@9; 4;݇lY0 쪧rHgCKy5frg zuzE:=k ^K@_ {; E5dG,cɷ9U5qJzQy Ftl [{RbKמ<)͡@CVY1ہ8vsVC9NZRw>A;hG)lH2m<흑4-Z(M+,zn3 9Ja+ԃ]B6wvuL4B}];Bv͛ޜbBP ܒݢz@tT_-OW_0r'PԢd( ΂Җ8bE􎇨Rk 2) cD O jTVUfZ+t+j_iT5¦gx.hèR[/R*FՄTȃ n #A5+űP\弊V숢P8G) Dׁ)@I`:c)` _ަ 'ȃH*Tk4rB1`Ar JhJtHA+;7 : Zw\jUutG-D,DVT`*J,+Tp^&UTĪ+P0&1VZ-31ѠDbʇ np !k<.)$s^(L|5b`)k,AɮY_}uW4ߢGnKmFO,i2\Ea5W TH/~W;EZ_'axǻϰ FF,ZG4B&w?.Dz JyJe#N(tXy9m҅܁e17$MVLۘ nx@HCrd?P<\ƟrV3L3=CEcYNNk396KDD>*M&&R\e]]fu/C0̦+3uW.̗5v:qoJ5yqHCT""qטNa~gPUDM[l(َ!o%+sgSpeB!J2f l[<];T+ASJyX(ԍΑތo/Ώɂ~;ψ7+Ȉzq\Yo,̪8VL"=jY-k '{g⿻p״?GN #s ;AY 0P%ڸ T+ީՍ# ZL 3iT TXU _:;؂wy W詯{޾8Kٕr>]w'Z&kֳos'1'Lk *.B@%p/?GGj)ZTĬu,Ox&bmy֯ P ]/Tމ~jK'h"`$b41Ig0O7F(Նo..}3{F7Ζߤ ?ԏf˔Kh?.ʹ?MFhiB~?L8%+&D'n} Z {º~5r ~ȗs1VsWgRǰ,rZ~_#n1:u/®[rڭrKњR.%%ϟ6$An%ArT*v{BsIPJw^S %Pc2盫MeO좛䣭O!.dr$Z06TlV5s;Dq8y͝c_-)HxP7xN|V\'_˶A-ݝo]V0ewzR#Hhp)@k 8v,-*Rξ_Es^T;Q"3BvɚD0tiYyX~Al]ي8b)Ŗ!VGMsj8;嘹#*(+29ť8vVQTœOvv"6qB .VnLu'sh '`R!.|z`fjhx': "K֣#z,`DRL tCKLDx 0j=lPɛt(ӳVi| ״ʠ&Y\LzD` G  ;N2eNKDiE)ͣjA2f`8ج~{Q둎+'(F2R%)%w&ys,x(eQ|¸i$b-4Td `9҉1nNo9BJh T!ljjQ(tʎ60U#<\cĻ<`Ļ0(1휨 ;oP!\+=F9Hxc=\NxKx-ʙ &[2*ncGsr(9 !Pxe,VF!J̃àOx0HjaSoG.RN(l,8LjhmD98,笈 ;M'yA7J/&p#F;$8 / !XhI3]Z^pNhx))[Gtgb:Yg4oѥm4ltc1sw=ɷR)I̢Z/նs-@k.D,S YZږ~Í|%@Q/D q-8YYq+)pBRU,͸ᒛH 2RI2͌OAF?7 \)h{WJeS* ?цUJdFd ;t9 *lv\'|ÎV%zS X Hj.l)13RX8F$ Ɉ^% $S,=g*-%pO4È& })"VL<+ލ>58Pٹ/V,6ww堥14@ { DNj`Ǽ+gRWjy9mS7<+{0Oy /V=eXe>~K l/~ '[ra>-Nn:{]=eP^ŰKE10^y$~}u8њ+*[n,w \ Utbk^Wʽ^ rM[+Fcego7nm3C&T3~)M.E(o^}\렧D| x 7y' 0v<~h@0 z>gm&Ow7R3Wctt /~9(#aJT{dH8F,8@Ehٝ;@@Y~"mjɸ6EkWsS@S)=s$aVRY*+WFuP<>oo,JU>, ŚND%:PAT 'JXNغrh5k, MVB|n_ܺ߁~^Uw~,JΜaHSil_Ѐ߬D%-6\{ڷ!<,%~~'3}>$AVf#-J@2-/>iJJP.'{иT6)~|*oiVh+bITThG6?Md8CXL}d\T<{#̭lJR5b\ i+vc *BFc F+4؃{BkYTb^)4gnZSzL~俢:gtL8x&]vͤ;u.Pֳ0~L,^TASqWS/fa/D؀SV r;`9o{ElCZ3ԩL5s+. 
UYaDxNE{bSfdd Tw97QFeE&M$Z7e*5dLzɸCQZ̭#DcާF2:) !RxT;mz%\N5fҶ]n"ʷ#m3ɾ)/tt5@fXw؝@/ ]I"]jYQb#9W\T1/aT1RŵR\J87q)vGiS\v׾l.x&%O.{<LM8i%CEn2?>t)0 PسXڱkѹcq~t'Goˁxɚi{S~iY(Eif^7ٽfWd GOkS;6ƥGؔ4)9m6BgV1QK0;D0L;LT]X _nsWwx֙jǪnb \7&1F JYf#!:h0rNQUYLD2RxdfRZ߹4M|N:jpY ړ܊.)/[DeR:o"Bq%X8)B`cy8Jjkj)$J֛Q~2I1i\Acҽ~&FF(x>笽*x>?4&KukߑA0DM5G!z"^f-]o[>#9\EJF?,gb{ԙyLTԲͻelCBsm$S*[5%4ڭ)9u[<&OUڭyjvkCBs=X0#3hk8'H^x.\"Q2phzg¦^NX}k^A NڪO^=Ig &Ëpfs"CdTUDfX$gx=5RrPqa alm>Q=;Zʎ*9J\bI{j!v19 KolisQ(UXҌQ)p8(%JAZ +A+qMHo !)ei$?i*O\n$I2vS~3IDy1cwT0Ҕ af{Lp4QrGF6 !Z Upp+U#k(zz`BHv;EpBS"}ŸXaAmhX_)%vn9VJVHS0KMEkAm䠯}b;b/|ZDFF5&r ,"Ό!=ޒk/\c̔`DT lwϝ-Ta;$#w UG|,{Z(KW FMN#K!T4;/$si!5؅/{(Ohb G-0)M0mu[z-s%;8Mτ\6}YSZ!z,Y8XQԔ,l[ܹ)eekܙCm| lN;OM[q3hS[RZفi ;J;;TD'v~ss "t`wtUwqv벂)><§V] o{*ÞTzgvʩ.``U˘_L렬}aX;v.?5Yksɟ_<<^Y tRU&jbj)E刣ߋOS›WGÛ1&Y38?M?wm=nJ/gn8yzXH߾EfuiPd+XUdU:ju|熮  &>d}2):gp$b*PSXs1z jqQDGp&q5qƏ4YY, ]⥒:/}ֈ׽4?bb~ ;c7w,`&ն[_4 <lf\5NT,n<6ʃ۲<('*i|"D`*OSc'!!To _۔LXű%S-9`?NyȖ݄Є<Ɵ L0-lo^c@+H9Lӛ.ޖ~N|-d⹓Јi( PQ-ךJ5CE)'@HJ7W*ϸz3ZA۔Z$*r)gMԅ̇܀X@E_Љ4~_769c%c$2L(εp "\8j " b&(ȠAg\+ Z~eGz = ꓵj4fA@+@ђ=qCʓ։~9T NX-?xU׸d=i}`q$^ 518t]J/jԋ@SE 7/m?57g\ JL*.~D*O,50iYf.k qKeiA}Jhar'~a&E'kVۙK""}z#3KMŔf a iB?*B+;3ฤ~:Jq$ǓAUVEq*lyj0|AĢ>x ZG )+Rup*y!Jq!ˬ YO3-u49jSFؙH} t5 g5IpHM &mkD0!τ>=m9]E,?8D.h*x*5Ōu/"SyI/'b69U(0KMF![@!E<ǹu*`y-;r21Ӳ*Yԡ䳮Ÿ1ΚIDӞ\Q:' (&m .O&:\u^٦x%QUqj+@'טV :o;83LsP dZ"8CAXZyRzهLؒLr^Ɓ|:u}6W1ƟsLYNIpOJLJ;g M݁BKz-M_qeǦpD'RٽBr%_j)< h-2N(x:Қm$;G\3k= l5hlZ`@ P VJqO$vH5)z8i袭, Q"0f#LK3Όp.4yQK_]J.eЃ>yUH`j6S 7%}&1q0-7egR9)/~xBi.ms`AY*E+1i'xM@Hsʞ@^I,/GcGSDLeZ⯈9fD[Yb#is5&;i# p#`:jAUx%kQZȓ墉9UbO9YR!)fof.~0ɇ<,3sja/ _cKmX}W,~]qBob!̗?7_[+Q[/}D-D- ,VVZ2r}Nz~K71H(C)RBwaQbe_Lޒn\Yҭ))Ku)YPhoҭDS[+/D^SJMI"D"5FT=45hQbj`7%\ޯ^ &S1؅>8  *pJ))I+ c@-ֽ>YQmVq%3&Dl'MHbɢaS`PHRsPf-#>ÞrpHn D*A:6X먑㊵6:)a!)P6z @7dU)4ZdQgi'2gAJ!ievw͋UAռkˇI؊H*Rmy{M<OY/į;`ɑ 'QOP%DTtoUkPMuy<\\v^iXY_)X_Iס^/3n⸜U|z9+ڢwn}nI4]u)*Zdﮒ KA{rW1+Y"Ҝ̞闇I4 (,q3".#N-آAaLݶ#Yyc u@d'cWvCFYQ9$p$qRO%8!X]eP\]%5ޟ!VI=nI=0np_ ׼-G= {-EdviwvA਍]mM]C<8"/%I'{߻h+G5VTXۉYv1U5(H0bj=ORwmWb ~i sr~>o""C8{-X/gz8_!8K?s^6p`o/'D7ރSMR2bszfHE;ș﫮ꮮOcf}ߜOq5w߹6鏋-p.#%ȖR ){wji$agf>~͇JNW(@l#I^&gVzLk󾑎:uyS瘩c~(C(9{zg*ReitscA |#B*%!zw1LE%#;9KtYaF!d1m.*R+ Ϫb҉r#+MdVS,80-#Z@MURaKwJ21~p+FYyL$#,bV9oj{'M_@Y&5 f^YU≁/p0_aȫ8]<(ʃ&w2_G#䱤ŶwT& !99{o?ւ]WͶҚktTIh*>#ֿ҈Ԯ \c hdOvLE{HOGV^&a'nV6L!5Tʗ)Y2%+_dz Uzje>JG"C%-|2Q ]ee*.rv֊ihq  dA(D2D/A/pxA|@H~6 *@1qdMGO-o gp1MaT lrW.+h&FW2 Kv1TalBnۈao-rCKշ7$^J#)n3Q"HZ4)j2WZR]t* mү׭ k1``7Aqe4fS{CLРr H` L50ЬW[=N{ SSf_1핉a:YWHACsge X)iF3Ll`j:Q&xƲeB5ͧDۢ,vN1C(;V\UrR-ƙl[J4Wb҉r#+^QL ~ŒXxZIP-'j0f#-[pDl1Lc7ogOYzŘ(ebں\X UZ _4sA!cו B&z.?SPÌV_t XV';$;R&]ON.Vxv%_U,vܥ(Fl2]u.ceᣤR$@(Ln/~>ƟKo9D"`}{dijyu7/""*}/|{ ?L o9Ђ<}{t;Sp91xJgk3DZPLןD!A9-pp{ J< "%`ϓwDl!n{r( 4՚l&JЍr':H&u0i4n|yI[i7g_^,^}FzŻs>] fyE|<֌u  ^j]+*/|_Lj~#r ,op-%Iنbcuu~rb$ّ֤ǕNy7rGWc_3a 9U;rQ;  ! 
ɇ<}y3j}r*{|>9 Ζ1QR-G m_.VXLj)A7L_WΓ6oE B[n*-aΧAi[h 6;Wm>蜯׀x)Ђ~YʻK~Kp,| K}q:껗iە' L L>u.]8y>l)"PlZF6>.-K>݆Q[ 1"لKB#m>fo= L/db-#˝}odAG4}]!}E¦lc+Ñ蝟.v99[R&JKTf#S|m7&Yc8-yC=|UGVIi[6v+ {߽* `GOH4R@mǡKR;Lݩk?=6v1pm<ۆsΏa\)lqH[E9$9CrX}Gٜzpq s%x:۾T ݗzrP0PA(bd\6#F5v C[-$ESf>sҞ^>0[l!vz ;&0m+$\ts5h.iGPu:lU1oFaψA3 2~(QEʛGіma`:XHћZ32U9rjS(ea$Mk%1R-JS4X07ьPjݹU(wݫQ)He!E Ơ5ZGu98NegAJ!)Q{ >ah_eMۻ>6}oz ţ+&}r0E*^&Rb ar=, < Q]0@+Xb+^kn ~ꅭɸV)`egy0@i8uÒᠰXT>1X"%2U-VHjB8TdMAlsPCNq]ΏZvzL$I$ZO4aള\ 0븓:$ VkXxtxn0h߯ lV6a<qOuV,U7Egb%x&I˛p3t]l۟2L&W<־\M2TpJd% ƌ@g\Lw*^/&Ev]V8'~'M5뒈qSo؎#uPQζ0D g+(]t,"APɆL,(H5xbg ަ HsOBiftgf$YS>N.U/p\#*uo5:$MWw1ίa|o|w _a:??ʷQaqZU 1RTSj@z!bV<󆜶L]MSD):'=%x4gCIlEλZ鐙* b 6/,]=E_Rz~Z2?.w"wIUYl)l:,T JJ\j+C?u[c`ƺ'OW:퉎w<ɕ*@rW}rZЁ{\O;90ޛzڮ݆8;oYm gZ7"e7X,Nf0d1h62y$)٢-jI+[nտ .r*yFZI {v3Y ~B9d-~Βmv ]qEha x>Q sIg Ԓ!sLa!ͤ22-wV!(Vac +\׷`nh{LZ_2d(sC`[g8Kr9W!vA&ΔGg78-3L*|BQM);=ti]KUy}<*$7//]@vwtP|t$hp9(cToQuf!fMೂΙYN G{]\Ö=GdB6ovvfFPkNZ=|ZDM}4$L`^ R,.n2 7_3hq|=N Ă+O낂pL܉2+䡠'&QPpn8?`nq{43 ["3J1FR3Nfx eq^'Ȍ {;x+OZywsFBGs [aq~2[3n6stXJRFFdZ C2,SRLIB3q )AZi d$S\ Eۉ%~,AvޮH%(&ɜ@ԲTafY"iePbsm20uT8'[&ѳDf 67!n&ة4K1hDm#RaHmFKa8r MgeDs%;ظ\ _vD3(JЛic P%: !FNl~)dvTkHhf]8aD`Ve"zv1Sh`yVL2;YxJo1w>%PH:l@(#IA$N%Ed\SpƸݢ9iI2JEF,$1N-60[aH)ҘXa"d]J% n,^NMfM>,sa~=vfyw;nQF6,'3;իԃTq%l*WT&[a97w+t5B| fDQ-2Xԉ !cHjPVhfP8R2LN Cue]Q9҄c$HZQ!yaJR*FBSG[#Iҭ1clg5ɬ&%%uk q&EF'6T"AJ F:r鲔hS0SJAKKqw*د3wyJѯFһiUs<,'J!/O 鯽eannO"bp/G_^ lXqx<\_0wЫwfa4ͯÿ3]n2 V#]{psw>W  >~$0% r*SD_TjRC(̕9ɐUx6.'*C.->QQ *,> JIp:!Lap1Hǒ2)#_|ZS }~?иCiD<MA ^hw0t ,5UeX AĹa84(T(Ixe`!D7{nŲ>YxG^ >3N ^e3#^gٙ,WJBH8SֱLT1 $=ÀAǬD:( f%Ngn vp2. +xg XC\`bln3<ʢC`^ˑCU&/9e4`K[Z FS1zj3ry* Dae6(Q(cT)L= 6(Uoڴ`|j2][Pp `O5% 1vJ x+T{8L*K^4D * f!M3lff%li/3v7894B$ K3iIDɄȌ̦r-s DIҔ811P1) PUҰz>7l̫|V!*T]"O"h[qaq^\tãdiJqw?+$Ҡ>TxD?/.ڢr&^wn\-#F~a? XQ3ɣ`u~X߭'7f6ݢxI \}npneK/< @~{F|>P 6V +D$R=vuBQv5)5&1w&ӗR0:K=7Y! {^Jb#^'T®BA kıEy{#=1sq"TJXT(hEar`Q!4qww a| ξ@9+š d{tpzi^M6]p3,jpn={`VXc:#d)#Z8HVG0VZ5WFsM-oZ t1'3;  |;_cyO}~i0fӛ4Q5u'gEIȈv,|էTa]4ë\[D_*֦Z)Rt\~xion6ʾ9ͦc?ۨsA'"/xR-dK32)<.O mg)E^20ZE#u#ȃp&5Ci,K>'QZQbyҥv9Fvf'[|$~.~;3nnwa>]o;. ՟aľcjx:a/7SZ屢 u3-f~A|<='/cD#ҳ%q I (ڍNb1wTn 2B;n#ڭ y"%SϾoj7Y(9GAľv_i!8mݚ.Q2Qo6ۣ =jX BD'Unδ[|fvkBB^FT!@j*xFJZ=џ?!С;kr%0E#՜㣟p] \dG?'4 '0p^'bO8 ؈ }Ohv%}σx1'&VlQ'2'&W u'b壟pX \'?Aqv4窝k5k c&;fw¨ OB}e<_z"x ߰=$z*nmi*-hO߻ůPPRQF4rwuPbA >@nhS0+2nHr1ס75 ;l6_U:%2Dž*mq-&OfhxL|a ƚmu@)FzuA¤"ʥTo)\^4g?WqyfiNG7Xz&"%1\v#N{_Og?`%:V \)i}.̬ I-jlؔ Yfrs7 MkJwVȤ67+Y_1h6{܄,Aloi)MT Uc%:T~:x} \pL "W+{Qqd=^BP G1HKB[C:P. sF%ʾGZۢ9H+Ҟx+~"_w אsޢp;3F?Cx *lk[`îmJ{_ƙӫy fGPioE^qѷyg A/X PgǓ:L]7 mħ83~oۛ-_%C~ŀa;k2y[OowW-S<~731쑊w䲕S%C;[ qq9a]>u^} 39daHꬉ3J|tzT*j7lѝ/o[ ]<58C0)ԎiIջ٪޹JsJa*1VP|*mUqL*|<>yF3|mбgDÝNv=ro횄E:K2h߷d`+r9 HDFVQcePhLZi!G NH^eZGր`\H8%JpM@LzUeh~;LYI"Z-)!8jÚH98ʼnI"-S؉$v#j,ԉ'5SC< (/t&{,wײLZ`WX{(v1'ޡ{EV 6Cp"4` Yj4K41T)-N0rXQB;4 [8fy"֊ 61EP EB*$D &JQae ͸V咚>HORX?O"V|( ,,[{79sk66֞T9Ņ&7fblI7PAo >^п.{t_>yue"2?4> OPL_=\>mLwm͍:2{V;*zzaLMyD-2ӗuȶPr'=S}e ^OxAC(Y /dK&7Ӫ2,y[O~T#Xѷc6& CD(n9Gdz*T xF@B^)Foa7i;G6 O%+طvk]n]H+eJxԲ2:X=a+Eq"]s>./)|5&5/oO'%*YDhZ": Ў3E|-1,Y.HkmGm$ZvЀGKIm٤4wg+2w-?M'>/l7+ 4k{TOоB7 VSRe$%N\ۊ'F1TtV oYY_d -`dot]#X_JjVoa]T}߷Nԍ/ޝeJ`|]}^;FMC(!7lʹa7 bAjծdo֔knXX++ch AzN"=\%$ Y5Ms5q ˢa!rcR ä|vOL| b' Q>$SOUZ@6gL{@B& "㪰)C2+ѐ ty^r>K߱|-kv/m'z7Y;H&ix'y9 Or21s[O~Zc`.)˃ B"Jrp7,I(^0-./οUz<竹75FJD'ͰU|,p݊S<ߛd;3ڨuOƇ4jkMBl]8ohUi-!6P6{ﵤW(˧g`=ՒnQŌ:x;u@S&,0qBC2BMe>^nC 11(ynWbu}Qo=Tr8 :pϑf̔JY! 
VRɼRym3:u`p*C\)(0d8}w.ѪMOX|3ڼTU,Nlp"oҖn%uRW cJ+ jӳÊa}իiYx6niAVQ߆cѬ\^>X`MXwiB3H4s@ß_?j6 (4͕K_ Ɏ9V"1&v<91GIqA:S}p:<( uMGĨ;Q3c𪴏r_z2E7˺" V2ܕ> /" -{ofTL+01Ɉ xv4W3m-|S42dqv*9xLOpidd Q"ySfiˇ:z향_#f[If3C 3Wh G??zK G4?CT~ oҢ20:lDL]Rz8Ơ/WW=ZT`rۥ{':͆D21*5 3x yoj&0&1 K$MsL=Mۦv5ǚ-NŮ ۮiꌔ'JICc I+l* z7Pс 9 mxqCv ~Ή8!G`< fl vD +][(c6Bj(kx6f&5R24BjkxGu3v?3Ryu0L-JBgIW:¥vBiATk˘5OpNB KI0}Ԅ.  }V=\R}K0?)(%;|kww+T!@*@\tQgWӨflk [?!PIo5m2!+%b^dNCTzևٜU|49G?}9 )$SEȌS.4TDt(1DuϹ}r_~y>I;VAQCzU}f#FZ{u̒AOrwmJz9'8;43 n䜗d{e$do%ۭvwK-[ 3Z$bXZ"k!7Azv}1K#8e,A#HHtR3 W$$Hch¤A8#dN5}Km)+_Uԕ2%yWnFaA0IZwQ_wlI"+uUJdj)=i̅4|O8!mk,S%.j8",|4gI0T09(گDIMD:HItt WʳS[~T*ih;{Sa$fdO}JDS>ȯnSleɻ˙h .tY)C:n6Rf%DB[@S2st*N[q7T?oh$g񧄔4%6wy.Jw) q)J>ۊz2(;"kJt'ЧdjFi-o$$"42N$6Tc))gu:rZ3פk<A>Bv&u:-[V'@1sx*M Z9¥ IePC,YrِP(Ws'uJ%>^P/D|FIR0JW;w{K{}6^!q؅YCd?wŎVj;Z#f@ 9\JDŽ͍*O 3$՚erÅҌQ |*?'>LCxE;՝inw J lȭsCiүמ|'ʱe>ǟ?3bߚŗ/ܬ}?>O ?y+q8_ +ԁ*}Q]aF ͗}b3c.嵪Rٟ/~XZ]qOfb5/{Y.GO⤫BEQ vQɢ2Frѡ4QL4QL0'y:>76YT368e +Ƶ"dDNNbG`G|4cﱲYӒ޻ŌQŌQ9c^Cr u#2js C WwZ{kYՌAO|*,q^L Byf$d%FADrsIk*p)Rǹ!aIXAMJ#V9JP>y 2s`sQ|Ȭ#.3J8#2 eID]¹@~\Gq~J=B΋߹K%XM&|\ඵ& o/E!yQ%y7x;"{ĐEC[eNkSp\#&&q(146^.M)[A,|_E:ZWL*Њ˿\{J6Q@%s5'Y4LjK`24hT/ tnf *56)]sXvcDxrS-tTei۠#{1.^CJ*%EHA*Lh|e :qz9\]pLyE4(,hEBeZie4pUƔ: b(s4$&(4߸uZZPf'TVRU8W7n2G1z=FmT.QN3&>{k&8it G ,FL M%kH6nF,vA kuXLREXlMcl3)gZY Bh&)PSEc^f;*lR#sݭǞ5$\ =^؈lӜTCJ}2.KE [('Un:n&v"*pBH"xU{w۠ yEOD!G{`#s^"s>~O- kfWnՅ u[ \6Xy(δj(~(;P`x(d5Q5?2k(xZZVC>sWTp}%_ԝcBqpfůŸټ3zZ={zXBG\+TNGU ڱg.üDEh!y,ZtQujV֩;Gv^{nҭ y,Z= ;/'a2?Kl!Kgi7pmd9v(4&&p3Vb IT{v.4@$>7)}X},VRJ_Ǔ9ZDj0,UC^aeа_CjIB7ʻY^M2\2F2l3#hϛǺKd[X'¿XwqI]_Gc#!)s['WAT&)`F fbtU1~wۻ~.n`/VoE식]gH-~B4k9"@֐Ua3_5A.ktXÞgn UTu+WЦ$s+MM_¨(hG?L"fHb{]uÕ Mbb\UeS}YkYkYkYkTf>΃`1iMrFD.{Jid۫ɥ᠍VL>5'ק!IJ2(.Ut&E$0p;B*g`M ev$i9wpσ47^xUB-u2ݒL{2g1)|)y1҆JT:+:55;:PUunXX6LRvҝKѼVUvn {( Zq}y匉kc-$#,C0>׹ I=,^:paHZ㭦xZW8߮iʏΫ9Cq r4z[ʃa@!`q'dw\0pB=pV/=Ptc",gym8P0/Mі/~ sd%CF )sZWo {mwB/o>=Վ"/a:֊vŹɫZ2jDےE*Ey>/ڬXLjŲIm|qZjIhKIƵN@<-yɔ&iěxvժ YU Z|}Md wmq~98 ZbHu6y8YjȒ">'?řѨc ؒ,~Uɪ3A1Bz08$Jeg>? 
odj똾!wz *3*UNCQL0 eo40'L.m'rX:Edκ("a#U`GD DFcTcFpsͨP hst#cEe#>_&rdaLO1_ +?LRq#=TQp@dBȺ.̭Ly@=lޥGJ2*xa2X VfndW7_qr1J=74"qFZ(8@4&F97:HZ*WYu2fj*9[XML4F(٭һ7kJh:2g-6ce+Ǚ+ƎW2zm1ѳFoF.G&x5]R_5;:6.AؤϥZhxw{=gsbRR9V1߈}FC39=X,9YH=Nf hMxpZ l*}k;-y%#I> FZwKir2+˷^Ƭp Ia5e_j&&Ѻ _ëP=oY!ip|8L{ x&9bB88r`p$BRîAQ HbB  6{Olj&8am@+ k&`ŋ3q05rg~(VojKD;SnrS`փICU,_%P| EfRz0(!>3}_?p%?~IͶI'8b>`v^I%DO I s̙?}wm{0(:|&B08_ ?d m/e2>bIe'{9צA0jsAd2G_rFtI@~N_WW?;gi$0]55WW?=ܿW7.>9?}ON8:g:pZ_?I-R1}?gȸtS4OϝH#:oc?Z!*&t6(;9n敹~*4-ŻK{ c)Cш-n{@1 yG 7!s2NЈ-싡Zej~jp9.(#6Y7=hhMfd1:-ľ-c[=h=]':=cWok6h8YomS8pxͻ`[4Ŭ$h)O ͥ'.#^+뷒|"Z"S՝v+vNCGՃ4@DqJ<mpzPP{%U1޽_)ݧVZS?4hJ VH=ow1|KsϾ`eY_Mb-M[5Kx7{d Xu՘tfquRm?HVH\ָY5-Lj+0g:E"mx/FMk>u(fr{{_-zf 6&ULG30MkZ[OԦp'5B[V̈́;:;QF:E lp<g|avjnNLK}FZz $5uQD^C3PnR]@4R&VPA] |ݨ5AҬY8TV6m@Ѭ-!>5pcϮd`hImðƢ#(*SVUsOt :ՒI6 R rvJzm%)9XipOk EHZu/VJ11hmW&[aLVCBq-)PM[9Ovdka'p VČ]Z*D~a*= kb)%xjz]-ݳoa=أTZ7:8QFJy+bKM#vp)|ןB ;*T#{i yMo.+ HӶؒ3H7[*n%9eŏeq4\v&3vXI;EAFxXF2xbFΫQc4q2ʪfbg7CIۋe/&D;qj+ĥcl(OnNj)<[̘8\xv#zuRc;9eN@0-dz<]ԯmY3qM\o)@0;kƋYuw+a^F 5o,UB#6RՁl08%bPXsG IOu`Io:d%\Uo% n'N:4AuͣmiGSCvvRm@g$nNKr`bΩlG-OT&JmI8rHkCrH)\90'ƇyxKB@"r8*Pl8 1-ܰt1D6o?jugZ^GOAg3=뵑}p}ƆOI^${EҾmxmT0\y)ƄHZdL qF #\9`?|> }7Rt7g Iaߕ/|H))[>sLP.цRqĞ2e%!aC!V4ȝVKt"(|o9X׸sOa&'")﯃Itwv{,`N6tY ć_|5) ]fMvxQo`/͗kho?"ۛ3X}W) a9AwЫY,ެZ߹ڀj6:"'S&ݷG_^ `H;T۩U`y ĝKOu I^yitxSf5b6[; Z!"4Ku^m|mDM^#%am'#߱,U:Q$t|):^``~h@lk~}i2e当ʕkZF9RYg협!njA"PgHo_[4u =2L+LT׸ jʥ=/ ߂:W»2׏pܚ%rxF3?fn:>]oF/LW2= ɇܷuܰo$܇uvo l4baGxӐ[s10nݪCQݣO- /VxS%9Vhݺw.֓JP3b$ ) 7β4JAqh{+W;- =".Zꏃw5c!2?[saNsNq)&,?^N^϶:AS$m#BE@òso36&.sי2aKB:Q$)7)IU)"TauB ^$;I:!3h-cIQj' Vz̜;m \B:CXC*W\1X{ )rb7@[}IlE!; /Z# X8`-FX*"/ .&*JfsuALp.f:i,w< -c6}㔞PkJ F`*i,U.hZe=6#i6nY$N-?k&3Me{<~ՄEB|&ot1 une4L@l!`VQ.h%)4Z^׵Jp*k@5]ñ.d@}灤v EtrD;xZZHnmH3,x] [.I c3n'<#vhւpςfs/saQFkxS wzW˃˻4` gw/Co`\CDq~F |܀~5}}q;BU[,?oV~j˯ Gh"Xۣ/wE<=z=|ź>%yFF m`rZCSzW0 RS)`٨vbƬ}V?A1^m5ͳd.Eh3V&IɤGP9 bRE«,#l\[0PM0)v]\?*4/VPL;0Y h4üAY ꂗΪ tLߖF)sX:ɕl_d 1ba$Z$ *`&5{ vXZP_!R#) صE"Zb"a=*&8A,tA f SI=! X*L}ZQzp营 ɻR9jUo*L!6rɹXƌ&1"{D8,`qIOL`wN{-TH0ՎD!Δ(h#]w`5t{csI.^o{6VhNKJVM`o1Ԛ)iua:щޯb^%.Cݓ|@vӉՊ^[|"!ۛӸܿ>p# 0vCΝtI1l d*|~o7l5,Vk5W}Tp2P.OGh &JQEM#w''] qOR^H&}Q|O7WrA wB+ͰB FX'K@ltf!̭Sh 9嘶RUqK^ckX4V#f:0峣 $2(kv% rF+\j&#}J DZTT7kurCu+5%)jҝG͎M1^.NB)K|`#&8lfgQ#Uq2wP-+'t /MEO8 mH3ѣeJQ11׵IgNK73$?ayM8J0? %π4)`JS1HʪtAi hE.ކ*[ѽ{vtS=d~ÅޛpTp\q舣G΄LuՇbzrKRI谸4`NnYd:UȇuaXW3B}PPHT:ZЌ^|LڡU=>B%OIk ՝r+;A(o{ |<]MB5醷aѝy8XE6<]m;OmiB5a܈hRpIyr+;t5%Τb=geيib +8WSBBK;<1 Mv X9u}`W9LyDm֕ .rDv?.>i|CMS'qC)0+h9OX=nnN]+I W# CTZ"JX\12*j4 XxN:fq+pX(%HPcQKϽrRcSKw͍Hv"_\5f&{Ke/I@hזI$~)K;(Lb!O7J0yD'^g},,W  ̊!&b$CE$5L!*9gw- պKʘ|2ژgY:ټ٫FT;U͸MՌ_j25盞*\A:Mpn좢βWsx->{Q}RӞg !>_3V%ꪥ;%: S;qg6wKT[#C/RVJc/[00t& Dh_rJfwh/a%dܡx|[, * i$Z VW 9jdEoЋHQ@i&(͵ [57a-g "8̴p,pа ;6!?c#t[ åg lq m.aF3ir3j&pEnO#Tcɶ ,ȵ:J[JKXmxIMJx5*m1Cr!q7I*>~R$͛}_[ b`OPOFonՍogb|tQ b99;eXOH3{Nң.cT3*@Æ$TK1*ɘ8A5XaaN9 \Fqr- 00nơ'H˩DKwF.Sȹy5-12wHeXhCҙupP*ciysVswlNh-)VBLSa}DZ)pvHqkD6 ĆB*}rA^=A.WÛgm vۢ$)ޅvKwܔV.E^sp)ͱk[nqhhT]5֐vο,'><οhq)C+Lijcژ"Z1}VV+7 cCagïIqq=TwIA%I!tz7>>㰞"ikNjkuEZ=mJ[0ly=c_Ѵ\<#e9lo"LOGb8Jp|*i')O@2L B't{ ?Ù'՝hh&|Rjx Ws=?!u4$"8) m&uueZc @%)oWW;/\;Z"#x\N9KLp NyGo@}i=D7"WӾ_!-r(&aEZGNDǚA3(AIomM{*kAP|[4hQ¹8n'g֘xZ~x n@.%S9 p8`Vj42ȉo Nvo]2o>L>ۥMGH-lΙ2TT7?[۾\m[35 KZ,]RsQZly9`qkz0û1Dc?ƢcӥhUH >R2ɱ Lh;kBQ=G8]ja1հÖԴODS!.`qR``t"Ge2音LpL8Tğ1fw_(~RrB7fbgů\r9'zV/9dƇ]5+t>g4S EIIq%#kJYQv)ߝWTY Y+7(<8vۻIw tbQǻ8ԙw?jݚWnQ6Uj̄ѻbb:Ϩz̊NEw['@քr)RRͧ<A>wFBxmݚWn6m]C)3\,{>3E9§aK&QڤnXݕ<.cnzYc&ib3[!*`/}X1. 
}GIl6?\Y)d1:eKI T򽾔T%ЋAz9MmUJ'}emr=/_{s2~j SCc+0Jqov,&,wkl#]cngtɡ^b蒎JE~o"L2x{&fOc Q&?1\Iwʟ`jݟc쳎!ގ+3'EH@-K|1BJq#7S~\H*)xHr h(C<&$j r ?;5?:AEz8'HxEu89~Y;E\th7à?q]Zƿ2]\^,v@TZbMD&;{Ɇ$(/N6"UG?~Lmna̜f0TTqVXZuW0ޚPm3a,`_ clؿ|3x YI~ (-?d jb2&cPop@[\Khk^M#h.meL}U`Te[ޥ|5p"@{z?p@0=8xoŻC C|Ɋ1%`C^;Nk~/e5A 7 %؃XTGa'$xk^_|n`׻{dw/)TQ?zYM܏7GUX, '(*":R[x Z5 $@ <=M[JiJcgO-$ mfW" DFVISӛ40ʧVm ̺[QA=KAw7*P&0w0u842x2cyuZ-FSh ;?-?;54$%52G; 5G 7ѴTxD.DZڒ0-YomαQ54$$<(WXkP`|GP5hÅFRoB'6A 6r (ܺV&/ETj[ń[D'5VAٝN1QEۇV=\C,ቮ:bk0 J:2IX޺7B')~0fZR2~#?p7^9$E12@l(rȇDS8<χJ(¡n_8R'4k?:J|a2}z `J}ςl/O9ӿ0N- '<>+* \< 3 y(5R1p`8^6Knk{'AMeOV?}u<áױ.GFHMҚzu 3H- 4UkCQM"@BDF3[ƇjHCes1Zer<v6C-l1f /~9=Hrr@>4m‹ALP I0bY gafQ$v.r{f)\h]F]Q7)Sd[uTi$v.0;E$ӯ5C;v [D&Q*r3|a<%CsA!^gXb]?vkF5ן|룽 h׳{g5mlx U[_U[_VwdÀZg~҄\,F@sN( T#i4ʍ0Vf HD }>X߀?ol|46~{bt3XT/ne?zӴǼx`VeVD?tn_GtQY_ |_o*>-~EE ZKJՆqF ”C+%4Pi'.qkDm4[#I׀ݎ2BgZ^H~Hʷ\}&L\y~#]!Ms SC3@IkBsd+Y$%!v/s,"3޹!]J >]jɯh'lT,?7IieB`sk`K>)q`r2*2.ƒK>]dy mN! ,әhfXIi}ؔEn+1v碗{eSh}j.. 0DA0XukoCoijn|u?o+:wexRF]ʈVת>0%;RGɼJ2W\.hK0\p6YBf8n|rI2Gr7s C%JOcf%'0EZH%F!i<v˃(] uYco̿p`L ƓEcU69ZW?_lVP{_yˊ_>ܿЍ8 _FyJ*t^v)٭oiE[K]odUOGÉKR<7>/'_TY&c Y+7"&ny7LHbb:Ϩz,;H*Ym{ nMX+7 6%s4@&\(?},,3#`8$C 1*lnXgK 4e^j_ 0%xW"š'Z|}1ĐNb)L%  43%VLde Kƈ Gh VC 2@! j5E˓Ij̾P $@-ñuZQEZqqP =bf5RBjBYȼ8Aִ>^׭j+$zQOp 2pPa-'G*xk/ P?g.~ɢ # wrF !O6T&Q 6DqJ,;76ec{IGnV?x?Я̔$Qڎ}BKBC5QC).| k2ܹ A=X{%[>}÷|ְ~莁+PeF_G)q{E'_1yT} qS 1OhJ@9ԎPSbޣRp7#c[ANQ-oFn"S,ÐQbpܴd@1(3$`Jr޵9bK1t9BМbs$^\+W[8iej^jOg?]e/~JI!&e6bڝ8ni:0  K>}Uj3_>$hJhwzj-ځs?hs|jPLd[A{:z>.G|H;^$~8XC=1fuq~6KrR:W/00P@G"fvBZiAhh{(m]|? X+Glei{M& X\y-BzwQ%<"0>h^i% b2o882sDLK*xeܯ=`gs2^.^J)PZ't FؒR?+=z/n#Ixܺ@Ȩ跨~+ 9Y( 6UJyi^R4/M뾲FJn5(,q +*9N|EAuTҀsn|@|L*/iIUnr8 3e/fVx>6Ś@*d;|}E [Kft׽z\ SBttWH~[y}GVz5&m]߹|}Q09xHHw;f }amt>\6 o76Uo}顋7Mmc1GSZbNBJ>Mo[7Mx`TK[=VaB+ZP ԜTikTឦp 0p]IۗF4]};<ä&$`q6lb ޠ(6ALCAQ1wSIe#k3Sb֢*|_H8K{sHYfpFM NȒ3-jiRLrIcl; F ]٭wv@=%y+q9ckEf8 (RˈT40J;PnYY0P"(޵S.fBQq[ eT" CԆʦ) I/PP[* H.[?{7 w$5뻳K{e?"84?ڋ[݄tX$XM-q0K*؋ۏ~/矯.qW]Mj*ytP[OGA7)r |Lza5g)%) o͠~Kw?twO99\i7aƻz_v:@޹#<:SgVJK8D#TܘgV C8Y{=ip;FG!Ui!XTYfcrۓd˥R7ON9ӹGg2j WDMVu0>c*Afғ6JɏR&L~,8m9g-)m34CFjQCa#Dl$wf7(̂<(w">tuF$>ON3ڃ|&ݦ47^/:8LdeiLh^d<$srgX,{"Z_3-q2E 3}Z D(UM>+b{F 6b:2ĥˏBǠِ=e7,fGeA=>{f'"K9e>"|PggZBۦ;zN677wtB (!Cnb 8d#Ҕ,Y:c)BP4[Ώ@i% !x gZf(iɹ0On܈=W8Bz)PBB- ^pd}"AZl3sMPt2pIA\BQZ`<"w` :73bxԲB{O<+  OrԂY#Z0jPߖZ&ِALx;d6L8|PِDf6a!߹v)I4n׭(e}i$&Inlڞ5%?3ՔTB?`6aPwd uAjۥiν_â2*!|8?3痷D6,XY`]guZ?!9y0, ΥFBwp .X!DlY1|~ ^i"E"|"!_G_1yΦKl+嘹%&l#[1R[)STXg)Xϝe.ÞJ y)5LjNs't!Ge(bK%J#0CWThz֥ _n `\Kc!l05k} >63][6+yٓݒeaa79- %fP%Q@υ"_FX"{W­ZPp.aRHZN;09sC 0C'0iO&,%\bE PU Ohϼ &aa^0<`_=l_//łϹ6!]@0%+n*)Q uE"\ūo`dk-Dj߸T5;Vy41 ESz)*-S \[I&߫u1F\ЈiOJiN&5%[Do>&!PI}>)΄Q/hSL}(kK-TD#z|*XE4UBg( orƦ;l }` ķ2Jq>f4fn|2H ^Ygzi}gr&ɼeog^{5|;^Ekfdɂ WH%K> W߿hKDU|k]^okU&ykk {!}`N,/1 "2\R|*Z{]|U_91T,Pl6zTE+pm58gaHVa|AǴU~Fy'mD-;ڬ\b&PLHQ;ڻʃڬseVJѼSJ"9&ћMpzoٮ5g<竸HJlPmTwJmi{g:+|M:+?{I [9uvҿ)w:@T\H^_g"G.|&u efHQ8gLt2yʖJ~h%KQeN]8b4/'p$3hAQ={pz׾|߹K^Rr-vlZΪ#^Q/ \7n뿨bT֬#T1۳SY}Gqu\!t0 'B? ^.pQ3JxNj|g9V?SQb)>kiG xx&H"NO3{(sf('2MID hH3ΔXC8՗Zb\jzڨr؜-޻qJ00wv_N֫5& !3'LQ#UJtX$N:(M`ES4_&[nkS?7h[꿜ݮs;@`F$ؐPZp%"vI|W(; 3"W"Lu8 J azPEH՝R(RQߨ[]ݹ682EIԝb j u4H:NO5q=NGXջFJPBGe(IF)s"1ε;m;5RD(@ٝꝪz)T;J9¿2;woO-[ϸ$J;&+ ; J  bRf 'dKPD)V*S.~:V a/o<,<ɉ !] 
^tzCKؼQ XqN1#dоYaY$hK ivzWFer{xAZBx,~|vyvekJ$bێQ Gy/#cU'Xd1 w}yL*rf+`iK<]n~O7m֯ilXk"&Xl-NZqzljE;Uqph`$6_[.A;h mk.X#,#Mh;p]CN)4ev:"4u7K*[zww%Qi_&p^PWr⾙dj]> gO Q`n'v[iv/0{ՑXE1%܍tDuY/Sѧ!rQ>cUhӁ7$gD䈳hkmRq,DŽ+Z/I[,OW,O!TgoA,r|r|ֺ(BD K)GE.{F᥇+N)jezxv텨K ҇ e( SͩP$&8;iZ.D ]1F</C3s޵Cu ` P\.C!5W]2(Zc`d_R'uHJ^AʂNS-89yhN^GxNq*^Q,jRW8rWV$JdW0BBN Tvn?eC9>zSN#GomhQQBg~-`@Z" P+ +)B(%~%{W8ҟzWgU X NHp-TN|= !X2WSSѓJ)51,XQgGG$׶ˍo9ځRTkտKƓxvf):,ú޺\ñ xpr擱6/̾C:1V n}tcyoga+f`r-T$" IEanLS˽͇;lfP>f駁~.!4}"Y,R˙0`Ӣx``J:^< YzN oF%ʋa70®D2R+ĉbjj9Xh%2gFit&9jg72P ۱wR>5f{qO( _*p jcG0"c[T\RgS,{#ּch$Vs'g?ڥ*]-bnOff g-L`_c4Ç_V_55na/;Wc [2 yaܮ˙<.Owfa=1BBL?Y L 4ؾI<t˹U>1w/FΛ}W^^x 7ߺ!s @yxj6G\ ƇZZdDauѤsKo*/'Vktm].X0eV߭~b'6ބu02Hgsk6^y=j_\Az%G.ϑIr,yQ(WZ~1҉ЊJiv +!0 s 2E)- ڔ8® ~xO A-0s l asCB|mж?.[X>}t-S,qG n} [\>f^QcܔXo,sغc A Ǝ|0uoo;@[$/%y\C~9띀[,ͧF`DcGm@VT i-$56s̨v^.$HHLbP݁]L%,Mv pLa N\fMM଍"їaq%qcچ]V"‘-B8-`z۝jDLhaBBdk;"#]7BXWͩW|1,XӟL'wcy ހI:ec7ߺ6v3l(g6WRJwPg //kTr(4HI ݕޗe_ŁSȔ]Lr͆J3̩ц&ê<W:Yl|dգ^Eu@) x*`X `uviDIܹΊ0P(v A%o@=~gϡ {ό8c.zfq2ypW{w5IN7ہ5uN#2TB3!)'*f.;"H#ibߑ)n$1Jo@(w/<3>BA&6bG.Ql5p959EB')\~C 9ײ#Jgzʑ_1ePf"@?m74fvegqĖKNߢd˲,G>Gӈ؊}.u%3${5?Kٟ[ȰS۫×VOڨ:lnuӕt,Ȼ()cƑ\R£=G'Hmj8< K>ĸN5P+8=~r?8(->1 DРǐMr9ZŎwǗqǂ=njmɃ&60G{RT} #si8竭tN>V%Azs4ܯ׮I2'!FK1Aj5{˿l1ԢZWR9d3Me 3hddu[>&FvNڞыKRc@i FG =Hl 阶C׵Q6pP}lLc;^tZ˽1 =>@o?O]f\oл5^@'/㣜q\O(~A TW좳J[k;#_-]ԥ%fB,TT\I]Q Z dUUyc4 :fhJےBl"b8ڮsӳq7mmjkN?OVlJ8:akNQoEK<&c ِog9lQiGG6'-}$cFAN}/J>tBQuNRM q*QoդrVdDv+}(Le/8%Cm2v~{fPOPЏ2薙E`*UH-য়,P|X3ubJϿX f _[k)~uW.~jFзPy$GYNx@p*p1e"z^eIK,-={%5]_޾urW; q g| 8{Y[eTI.,;G<6C+O*cL020( n0S{D'V'ܜ;onWb4N݇!dpru诠! چU+g/ <Җd+f_'+62Z/|xߎ?i]fRfi]ľkb5wbz_J`e5n,\tn35=/W/w% ӿ/D<  Ԯe=zATLmA*>egrb2^WH9v1Fc~jllvz2=gx+>?u~h[qώ7y"x[~o_o!z}yߋjo I 4 *U ؋}Ur ڭ}}* 2.Dŏ] w {`c髯 [A @V-OsÂy{"k%G?);ؠCU >X&O^1kL֙ ⧒!8LJ)K:XUU,#8_*D X }Ѡs|P>t[{fFYgn@ >),!*[<x<ٚhR !U FʵF`׮#,gr 9񦙁sp /b{pz#^f-I4߽z uxj ^'(!jC-yF ӯ[ћfVtsmn.~{++:=K@䃯x콢cya?Ga/j&Σ#;rƾ}-c 2'x;a oҋwZnŽ2y轍2fQTD12䀭CUiLiӳ1Y;~QbU)dfyL.Y+zmk="XJ }*h\n]Gm6FcMBQW̱q$% JXrϥ5J5)t"hg%lh'@NSJ*Y*.PM=sojb46m,%J JC){ nrR!8`YLZ'hRJ 8!$եYk$IiUu-Q}FݮXߏdW|ねլevGCsq7&2] W񔵥p1f'R[Qen?nYW(U{hA V8U+P5purc9l⧞i"%Y!6 D$]^} &UEI dS'NL-O7\ Zw!0&ؙ߭鍚w:ja( Ny'H5-aVn1h;6o@gOePޙ hICQ潃i9it6oᨨ&oQuJVl7+҈P}9Y$w?o(zNok{aƥ狫w8HPqZZxMN%֙ڀnJ=ڃ7CN:DZ֞K,gTC\|"FLM Jx UL>N?~xPl.ÇiL@i *5;\5D\F?DBhȍ^h|0J <+W,:hRJ+ 'o(BjQِ:r16%L.E Bu"ZO:6+.@={00G8: 1i\?mp Lprd"!+{<:I9vb?^\gs~~"1z/)W5:>6q߽O@:=vw529w]e 35Pnĝvs"EjZKіS UjWωTyTt]?w'j)3#y,7ϰQF!̯NӌM#kRPD&rR3 ?Y9Uv\ϯ(LqW5ɥY%H,ΊP|WNh4aL1eC ϻ` aCP c/E㻻GPB;v`p};%SrWΫJ+U@Q Ǒhr`'Zva*b{p0K $K$o.+R)(QƭC!eFmA"au&6&4@t~פZ]}6]>xQ{j|0MCW 0kK\]AÎCSʺy~2b:LKS ~dNY~v%`>s2Zm$sZR>ĜSG_AJ$,Y^ 3cRZhmSֲ1:2S(iY⋺т %NkSy;ѦgggӮ$X|Y!iڰ4h5eu}IG.?9]ʩdإ-^[P?Cbk;M8M3-#KbP`{X@ENo8y&D5HS4ǮXZp1M;=90!0a|:tx"Z9ڋ޵$b[?m @mA_-EcIǿ>U+Q9ܥ}lWꮮ 2cR~ks=.)pL6-ǯuSNr&ssVq%c&õB!@?_o쳀=&1zo\4CzDT ؠEٱvl{J \/:89?L]{T2 (o; pd7h(qڇ!*3)=eN(G8\07틋?8ZHNp\Ps, *\͜E%12@M^ |C}X7d>KXPrN!n8FI՜2:zgE帘j7ո?h4X8(˫:!cnZ*Bg2 +XZᶺ{_"i!Z;j}}6Ӻ7: k/s[s'8u'lN9,TkTٖ9K}tNv0`?{N!X"3iG;ǖr/TXhc3e0~w׾qo\&M5 T\w,#4D>Dw,vIݵ/PMTRlK}UWy9`u1kE]aUǞ ?liG.+Wd^Yu<_.Zv N!cWW@uH?|=.|"C_(|4WMw ÙHzQin[[<=>= |^qH2H)GS92pdeW·y_&N,̫x~t&|4U-` mT= 0CG|(@Qj0YJͽSfjdރn/=ܼ/Y;ȦbVr|ŏbiʀT%Yx@lsp&#N.w[>-dg䀧LSGLC ق.$ d1˸z !7>0HWu}uā9lZ#Rre8%5$kT!sN[߈]B=P,!"9ba4TXs='7>V_|sͧ/CZTTN UjuhL?``g WC3| *u[JgWM(ә=42x`}6KcvYn]cfNJCv)h)6#`ڧXO`]ޡ?oIrկcdX8+.8,6%}{xs{ 9|ϔoRWte y쎢ʹx݂͙GCUk;iW9+u(5sSBp,lSz|hs`zלٙ"#]0:bgr8wd<닡5vY)E.qĀR5BK;[D~lsql7}'0,ږAQ~U TGF$v,jy> þ9M) >(lZ1F }0_'Xg*35LjV, >6*CeQ,o2O'uY~S!1)vqyP$csMEӛw]M:yb2w!u+d{k` LЎdL;tKc'L#98gv!&jEABPlLӔfo?D"MXwy1|iva]WCwcZYWT:k2$adXk wCN,@ң^݉^=lNVu^ՂA[ҥ1{IeAPuS{H;n0uY +uvWCq5INXǣSgٚw,g/=jo쯦:&^ _u^r\r햚Sޘ& 
1A%KM]۸tO<-3n|qq;g_s}9ol~:[YcH͐i9-Se\PrpYc ?t<,Ӵ\ Ŕ#gQBo^AvIY}heí{ H.zC3:|,]V7E?z©f]xR1j٤![5m!mO0IuT}<:H^|*A:Gfc^[B93E}IClN-cUt*" Ģ ^ G þlBU[,]9jԭZf)H?;uo%Z~DN α ؊&. d\'3_43|ow#Ҕd=Y**ZyY[+[$sփ5.CF (GyƠv'l~h0J}FvW4c2lU&,@$EFrjS.ȨJ_~WW81pzr4^weh֑q 9;\6L59֮En! eЙ~&I !{-?MV>cQ7P9LFphnI+L+DÄC%cxBto~8Tr^Ȫl" J[KC5A'"$d4+ϖR30r[b{w۪GA/X &}R5˅:>O/_n.zJ{`M?Ϯڢn睬]lu]~_Ǐ^W~͌4{}4OûzƲ|_[Znը.ʗ{kyU'wX^$ĚgN#[B l[ zS_̲_|Ιy`-yvńٱj& VRT\n"9d'(TS-1^+V0PU"SQU2|GNq_ xyn n/B UcBp4/cOH_T-u':Dhm 1C }eزN(6J *_#5\.0)ltmN#K"]c8%u40$ -8\8W~gqdI֗GU ejcYje/>{}I^_>6㦛 4Q:Oղii cq*UjW2󱿉?&-o2D4Ι-jɧ+9a8cӝ#$kvΠkc}L֛&g k4E3~[[μ(-!wōMǁk2~iI)&Q|Aɇl!}О 6.!SBhlD:K|U(=Td$ M7ٓ>Y2B {Iitԧ֕ETh#IRs]9€2.ijAⰑW4@UPm@%y|pAkس=4иr2~<~Z~/'?}&h/iQb՗6I]i KiYJ H|F URHHjc58pfiEa' ^߉WD_P̮r>#tQn{d7<+W>]>N]& o=%5íUAbCoYή~:,&_jmxق[\Am̎Յ6ȹOT閣VF^>w.P Tnt\viB\֖?s:8OB/Ct-lMGuvJ[g/KC"L"eic'ڇHH-ŜY4#!}}$:ېkFFP ٖP n5j.|\Drt뚏P"jH𷓿̘6NV6)E)QO%^9.{T=1q:=O}! ;k::O2Boh6?xZƒאueX%yn|sX26N~U^e8ۻ KYRr>yYGELsvzUqB6gz ֖9M^M/>n]H3V2UK)|G![[Nw4n O)ݟvk_џݺg.[˔F;]*='8EgqޛL~NgRڨaQTн| A .߽m\F^7,j9^|[1/0۔sǡ9ZE^^,ZŃGU6`i*Bb*c@WCΤ,"^F+=.K܏nBz5u颏n)qRF=iVq!őq9I 0Cpiq~uZ>q=}7L쏤y݉esg32ˮ){^D9co4sp ίmr1 d]֐l[%%Y|iCQ]S"͒>"XQCQZ<(N$%+iYj/-ADf16[m aZ;;PzΚS=7ЩLL*2>i)ͪծhMcJ}PB-/R$dJЫWƷ-ǜ#^)§n8gVu`C]X,@̤Zayh)7k5|=3@PvAaP0JQS= 6.&M},%t 2 u@fBQ0QhaJOr MR-Dp*Y@)ړBPТݏ| ] 8Q]D)PxGt~F\!"D-z@@ oXuǠ2E g)*m"\?| La5O#;1 etwT97j4|V3P]4TRCUu<蠙#E9* 5?T*Shnњh{1g<"%2d:q[@ۏ9'>j.kQ ~PL-Q:ȬDwLH-Lq P+<>!iySBmd]v*F8 'q{MAtOM28=X!{vM9,ֽટv~yݘ*]wboE 8\KTzM'bvSӗe;嬇sb/)v3*!y]fi>q0ֵ;o7G.k`uxCF Mj-dʷ$FOίl ec택ކ|NkA}!A</t a4blK>P-ѨU٣)gf*@B)`h7*c[du(nFMRIDuyQn$䙋2n)jurVsorNз!ߧ2W^(X.MN.4' 1(_ټ B2a.@ty-:@᫦a8.Iڰџm Zty Bm(2%Ҥ;d!iwEmcsbza)!fFR'!JYahί]@a5.EKTij vBȧ]#0q+ tp38 %u,?3Vh!9ޡzLQF MjNx?9+l@>yUzNJv%i=:U-bځۤa*?1k30#hN.,pϯ1Y&,md]O4RE VRxlE y~嬰i#7AQѵ ,1Pu/Wx=/)+Cy?9+lB>3Х)Siȓ*h6Pbnnt`2TtrC OEt4E:)<|Gbxno ')Aof.oX]܊4b@uz5ҳi}h"FKyp-f{׶nC}I"9HsAӃ|}9@رgϞ lO%iZ)ckϝ6'Rm^w{Zx7R:-QWD_BuAx_ozΣ\>lXzB?gfH$wӫ`;r:B0o!K>x!YH9P&Xr&0{RxLɪ<(T)͔R' _t,Gc Տcaw8 z[B f7/mQ=X[i}3$pRZ"8w.|a8 ׍},sh0c :緧g:i_tJdGMQ?2?di_/|-{dpo"u03oz(Da-tDQLfĞNa78{cqEybyjߓFs镰%QK靥oX*x oX_F5YvDS3Qek'@]@8z$W3RϤ5Y{g);CI O4='2Y>0!2uƴ. ,^]0L)> cTZMb2dd@$mW6ՀdXc`g/ܵwU@5Z7nHL($Q~e!i/CJpZgw6PcEg#6`I{sjG=L +B̉1UF]q ϒ %gKg D0Gӣ8IA$k}߶ۍFȨz\k C<)zru(&|3aF[6FM$-ɿNxա@ln@3V˶ro:aaHϜhίA. 
Jan 22 13:43:58 crc kubenswrapper[4769]: Trace[1962280142]: ---"Objects listed" error: 14995ms (13:43:58.891)
Jan 22 13:43:58 crc kubenswrapper[4769]: Trace[1962280142]: [14.995360076s] [14.995360076s] END
Jan 22 13:43:58 crc kubenswrapper[4769]: I0122 13:43:58.891999 4769 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 22 13:43:58 crc kubenswrapper[4769]: I0122 13:43:58.892619 4769 trace.go:236] Trace[2111744768]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (22-Jan-2026 13:43:44.350) (total time: 14542ms):
Jan 22 13:43:58 crc kubenswrapper[4769]: Trace[2111744768]: ---"Objects listed" error: 14541ms (13:43:58.892)
Jan 22 13:43:58 crc kubenswrapper[4769]: Trace[2111744768]: [14.542023122s] [14.542023122s] END
Jan 22 13:43:58 crc kubenswrapper[4769]: I0122 13:43:58.892667 4769 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 22 13:43:58 crc kubenswrapper[4769]: I0122 13:43:58.893366 4769 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Jan 22 13:43:58 crc kubenswrapper[4769]: I0122 13:43:58.894680 4769 trace.go:236] Trace[2098567548]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (22-Jan-2026 13:43:44.115) (total time: 14779ms):
Jan 22 13:43:58 crc kubenswrapper[4769]: Trace[2098567548]: ---"Objects listed" error: 14779ms (13:43:58.894)
Jan 22 13:43:58 crc kubenswrapper[4769]: Trace[2098567548]: [14.779246253s] [14.779246253s] END
Jan 22 13:43:58 crc kubenswrapper[4769]: I0122 13:43:58.894712 4769 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 22 13:43:58 crc kubenswrapper[4769]: E0122 13:43:58.897034 4769 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Jan 22 13:43:58 crc kubenswrapper[4769]: I0122 13:43:58.935138 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 22 13:43:58 crc kubenswrapper[4769]: I0122 13:43:58.941375 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 22 13:43:58 crc kubenswrapper[4769]: E0122 13:43:58.995467 4769 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 22 13:43:59 crc kubenswrapper[4769]: I0122 13:43:59.372373 4769 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-22 13:38:58 +0000 UTC, rotation deadline is 2026-12-15 23:09:48.04233864 +0000 UTC
Jan 22 13:43:59 crc kubenswrapper[4769]: I0122 13:43:59.372445 4769 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7857h25m48.669901516s for next certificate rotation
Jan 22 13:43:59 crc kubenswrapper[4769]: I0122 13:43:59.825499 4769 apiserver.go:52] "Watching apiserver"
Jan 22 13:43:59 crc kubenswrapper[4769]: I0122 13:43:59.827395 4769 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 22 13:43:59 crc kubenswrapper[4769]: I0122 13:43:59.827935 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb"]
Jan 22 13:43:59 crc kubenswrapper[4769]: I0122 13:43:59.828336 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 22 13:43:59 crc kubenswrapper[4769]: I0122 13:43:59.828415 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 22 13:43:59 crc kubenswrapper[4769]: I0122 13:43:59.828469 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 22 13:43:59 crc kubenswrapper[4769]: E0122 13:43:59.828630 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 22 13:43:59 crc kubenswrapper[4769]: E0122 13:43:59.828660 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 22 13:43:59 crc kubenswrapper[4769]: I0122 13:43:59.828725 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 22 13:43:59 crc kubenswrapper[4769]: I0122 13:43:59.829035 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 22 13:43:59 crc kubenswrapper[4769]: I0122 13:43:59.829067 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 22 13:43:59 crc kubenswrapper[4769]: E0122 13:43:59.829113 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 22 13:43:59 crc kubenswrapper[4769]: I0122 13:43:59.829915 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 22 13:43:59 crc kubenswrapper[4769]: I0122 13:43:59.831040 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Jan 22 13:43:59 crc kubenswrapper[4769]: I0122 13:43:59.831874 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 22 13:43:59 crc kubenswrapper[4769]: I0122 13:43:59.831907 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Jan 22 13:43:59 crc kubenswrapper[4769]: I0122 13:43:59.832196 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 22 13:43:59 crc kubenswrapper[4769]: I0122 13:43:59.832337 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 22 13:43:59 crc kubenswrapper[4769]: I0122 13:43:59.832430 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 22 13:43:59 crc kubenswrapper[4769]: I0122 13:43:59.833777 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 22 13:43:59 crc kubenswrapper[4769]: I0122 13:43:59.835284 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 03:01:00.07186535 +0000 UTC
Jan 22 13:43:59 crc kubenswrapper[4769]: I0122 13:43:59.835375 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Jan 22 13:43:59 crc kubenswrapper[4769]: I0122 13:43:59.858097 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 22 13:43:59 crc kubenswrapper[4769]: I0122 13:43:59.869616 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 22 13:43:59 crc kubenswrapper[4769]: I0122 13:43:59.881677 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2dae2b-fadf-49f8-8ff4-4772ad128dca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45689b9787ddf9d1ee0769e761b7b24a3804e1a92e4bc522b5fc4ef4dc145ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83046a8dcab554309ccb822f7ea451bd90e9bbe0037f2b51061cb2943122ac47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\
"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad0f1c98d4f80ff4fb3c0a0b65518ea6fdf45f4e428661eb8c34419d0774da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0a1825528a237f05c91fcea30668c1b12f4b58a604912844e672424907cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 13:43:59 crc kubenswrapper[4769]: I0122 13:43:59.894656 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 13:43:59 crc kubenswrapper[4769]: I0122 13:43:59.904418 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 13:43:59 crc kubenswrapper[4769]: I0122 13:43:59.916655 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 13:43:59 crc kubenswrapper[4769]: I0122 13:43:59.925011 4769 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 22 13:43:59 crc kubenswrapper[4769]: I0122 13:43:59.928667 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 13:43:59 crc kubenswrapper[4769]: I0122 13:43:59.942661 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 13:43:59 crc kubenswrapper[4769]: I0122 13:43:59.959313 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 13:43:59 crc kubenswrapper[4769]: I0122 13:43:59.991232 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 22 13:43:59 crc kubenswrapper[4769]: I0122 13:43:59.993114 4769 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d" exitCode=255 Jan 22 13:43:59 crc kubenswrapper[4769]: I0122 13:43:59.993206 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d"} Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.000431 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.000479 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.000541 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.000565 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.000773 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.000834 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.000892 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.000964 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.000991 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.001267 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.001336 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.001865 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.001944 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.001974 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.002525 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.002460 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.002611 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.002967 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.003034 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.003143 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.002638 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.003227 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.003254 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.003514 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.003917 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.004868 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.004885 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.005005 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.005085 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.005321 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.005370 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.005444 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.005481 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.005519 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.005550 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.005577 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.005624 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.005645 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.005654 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.005672 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.005701 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.005761 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.005810 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.005839 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.005848 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.005888 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.005916 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006019 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006066 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006097 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006123 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006154 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006178 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006230 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006243 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006258 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006321 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006351 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006354 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006380 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006408 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006435 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006456 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006477 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006498 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006519 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006544 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006565 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006586 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" 
(UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006609 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006631 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006655 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006683 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006708 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006733 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006758 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006783 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006832 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006859 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") 
pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006895 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006920 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006943 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006970 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006995 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.007019 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.007041 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.007061 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.007081 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.007103 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: 
\"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.007128 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.007151 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.007173 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.007194 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006381 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006498 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006514 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006758 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006786 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006945 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006941 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006960 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.007086 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.007118 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.007211 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.010180 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.007216 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: E0122 13:44:00.007221 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:44:00.507198997 +0000 UTC m=+19.918308926 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.007374 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.007395 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.007418 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.007449 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.007616 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). 
InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.007633 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.007832 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.007923 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.007983 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.008064 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.008100 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.008156 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.008179 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.008266 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.008291 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.008327 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.008488 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.009559 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.009585 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.009844 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.009948 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.009967 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.010037 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.010057 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.010298 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.010364 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.010385 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.010405 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.010438 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.010461 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.010486 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.010508 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.010534 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.010554 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.010578 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.010581 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.010602 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.010630 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.010653 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.010680 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.010680 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.010706 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.010736 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.010762 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.010809 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.010837 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.010865 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.010888 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.010936 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.010953 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.010971 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod 
\"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.010990 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011010 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011028 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011049 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011066 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011084 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011103 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011122 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011138 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011157 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: 
\"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011198 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011216 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011233 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011256 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011272 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011288 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011353 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011370 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011387 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011404 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011420 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011439 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011457 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011476 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011493 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011511 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011528 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011554 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011580 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011603 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011626 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011657 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011683 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011709 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011736 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011761 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011805 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011836 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011865 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011922 4769 
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011922 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011947 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011971 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011996 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.012022 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.012044 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.012072 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.012096 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.012120 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.012143 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") "
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.012166 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.012189 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.012213 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.012239 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.012266 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.012289 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.012315 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.012338 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.012361 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.012385 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.012410 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.012433 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.012460 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.012482 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.012507 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.012533 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.012558 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.012584 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.012607 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.012629 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.012657 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.012680 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.012705 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.012730 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.010760 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.012755 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.010949 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.010960 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011042 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011191 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011437 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011455 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011648 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011931 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.012180 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.012783 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.012954 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.012979 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.013005 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.013032 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.013058 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.013086 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.013111 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.013136 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.013160 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: 
\"43509403-f426-496e-be36-56cef71462f5\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.013186 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.013213 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.013239 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.013264 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.013290 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.013313 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.013340 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.013365 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.013419 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.013449 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: 
\"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.013473 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.013497 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.013522 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.013548 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.013573 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.013632 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.013672 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.013706 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.013754 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: 
\"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.013788 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.013845 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.013872 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.013899 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.013926 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.013973 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014010 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014038 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014065 4769 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014092 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014216 4769 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014235 4769 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014252 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014267 4769 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014283 4769 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014297 4769 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014310 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014323 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014337 4769 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014350 4769 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014362 4769 
reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014375 4769 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014388 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014400 4769 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014414 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014426 4769 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014439 4769 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014452 4769 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014465 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014479 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014493 4769 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014506 4769 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014518 4769 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014531 4769 reconciler_common.go:293] "Volume 
detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014545 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014559 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014600 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014614 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014628 4769 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014641 4769 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014653 4769 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014666 4769 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014681 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014695 4769 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014709 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014724 4769 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc 
kubenswrapper[4769]: I0122 13:44:00.014738 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014754 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014778 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014807 4769 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014821 4769 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014835 4769 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014849 4769 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014863 4769 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014875 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014887 4769 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014899 4769 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014914 4769 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014927 4769 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" 
(UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014940 4769 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014953 4769 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014965 4769 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014977 4769 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014989 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.015001 4769 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.015013 4769 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.015026 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.015038 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.015051 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.015064 4769 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.015077 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.015091 4769 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.015106 4769 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.015120 4769 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.015132 4769 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.015145 4769 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.015159 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.015171 4769 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.015183 4769 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.016109 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.012197 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.019230 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.012734 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.013004 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.013281 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.013521 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.013781 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014083 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014629 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.015461 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.015858 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.015899 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.016206 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.016536 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). 
InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.016711 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.016867 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.016932 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.016936 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.017086 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.017131 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.017163 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.017296 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.017309 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.017620 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.017641 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.017617 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.018007 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.018357 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.018376 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.018528 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.018702 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.018647 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.018766 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.018875 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.018964 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.019270 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.019807 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.020182 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.020259 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.020346 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.020674 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.020944 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.020948 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.021014 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.021041 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.021188 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.021544 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.021577 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.021659 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.021774 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.021902 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.021984 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.022069 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.022147 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.022306 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.022410 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.022553 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.022691 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.022960 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.022911 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.023235 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.023579 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.023780 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.024309 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.024531 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 13:44:00 crc kubenswrapper[4769]: E0122 13:44:00.024595 4769 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 13:44:00 crc kubenswrapper[4769]: E0122 13:44:00.024648 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 13:44:00.524629429 +0000 UTC m=+19.935739358 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.024829 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.024958 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.025227 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.025670 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.025726 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.025764 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.025923 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.026175 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.026227 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.026409 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.026774 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.027149 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.027743 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.027785 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.028181 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.028221 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.028207 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.028772 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.029134 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.029182 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.029545 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.029614 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.030068 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.030416 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.031152 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.031498 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.031657 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.031861 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.032062 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.032523 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.032866 4769 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.033636 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.034681 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.035277 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.035897 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.036040 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.036945 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.036953 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.037356 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.037467 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.038247 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.038284 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 13:44:00 crc kubenswrapper[4769]: E0122 13:44:00.038312 4769 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 13:44:00 crc kubenswrapper[4769]: E0122 13:44:00.038441 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 13:44:00.538417083 +0000 UTC m=+19.949527022 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.039248 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.039706 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.040414 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.047316 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). 
InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.047339 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.047529 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.055998 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.056286 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.056540 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.056812 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.056939 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.057271 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.058055 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.058997 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.059147 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.059561 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.060018 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: E0122 13:44:00.061874 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 13:44:00 crc kubenswrapper[4769]: E0122 13:44:00.061928 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 13:44:00 crc kubenswrapper[4769]: E0122 13:44:00.061946 4769 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 13:44:00 crc kubenswrapper[4769]: E0122 13:44:00.062024 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 13:44:00 crc kubenswrapper[4769]: E0122 13:44:00.062051 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 13:44:00 crc kubenswrapper[4769]: E0122 13:44:00.062064 4769 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 13:44:00 crc kubenswrapper[4769]: E0122 13:44:00.062030 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 13:44:00.562005037 +0000 UTC m=+19.973114966 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 13:44:00 crc kubenswrapper[4769]: E0122 13:44:00.062143 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-22 13:44:00.5621252 +0000 UTC m=+19.973235129 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.063399 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.065266 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.065955 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.066007 4769 scope.go:117] "RemoveContainer" containerID="1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.067145 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.067189 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.071104 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.071220 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.075688 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.081759 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.097540 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.099174 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.107180 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.118068 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.118441 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.118536 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.118605 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.118686 4769 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.118697 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.118709 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.118750 4769 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.118763 4769 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.118776 4769 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.118881 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.119040 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.119062 4769 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" 
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.119075 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.119089 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.119359 4769 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.119386 4769 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.119434 4769 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.119447 4769 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.119460 4769 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.119471 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.119666 4769 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.119678 4769 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.119690 4769 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.119741 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.119753 4769 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.119764 4769 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120029 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120051 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120066 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120080 4769 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120095 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120106 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120118 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120130 4769 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120143 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120154 4769 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120166 4769 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120178 4769 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120190 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120202 4769 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120213 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120224 4769 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120235 4769 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120246 4769 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120257 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120269 4769 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120280 4769 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120302 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120316 4769 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120328 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120340 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120351 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120363 4769 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120374 4769 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120385 4769 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120397 4769 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120409 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120420 4769 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120431 4769 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120444 4769 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120457 4769 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120468 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120480 4769 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120492 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\""
\"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120518 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120530 4769 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120541 4769 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120552 4769 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120564 4769 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120576 4769 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120592 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120603 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120615 4769 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120626 4769 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120640 4769 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120651 4769 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120663 4769 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120674 4769 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120686 4769 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120700 4769 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120711 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120722 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120734 4769 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120746 4769 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120758 4769 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120770 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120782 4769 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.121520 4769 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.121533 4769 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.121548 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: 
\"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.121573 4769 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.121587 4769 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.121598 4769 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.121611 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.121623 4769 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.121634 4769 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.121646 4769 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.121657 4769 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.121669 4769 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.121680 4769 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.121692 4769 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.121704 4769 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.121716 4769 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.121727 4769 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.124029 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2dae2b-fadf-49f8-8ff4-4772ad128dca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45689b9787ddf9d1ee0769e761b7b24a3804e1a92e4bc522b5fc4ef4dc145ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83046a8dcab554309ccb822f7ea451bd90e9bbe0037f2b51061cb2943122ac47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad0f1c98d4f80ff4fb3c0a0b65518ea6fdf45f4e428661eb8c34419d0774da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d
17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0a1825528a237f05c91fcea30668c1b12f4b58a604912844e672424907cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.121739 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.138624 4769 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.138642 4769 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.138656 4769 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.138666 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.138693 4769 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 22 
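The status_manager.go:875 failure above is the key event in this window: the kubelet's status PATCH is rejected because the API server must first consult the pod.network-node-identity.openshift.io admission webhook at https://127.0.0.1:9743, and nothing is listening there while the network-node-identity pod is itself being recreated. A minimal probe of that endpoint, with the address taken from the log and everything else an illustrative assumption:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Address copied from the webhook error in the records above.
	conn, err := net.DialTimeout("tcp", "127.0.0.1:9743", 2*time.Second)
	if err != nil {
		// Reproduces the failure mode seen in the log:
		// "dial tcp 127.0.0.1:9743: connect: connection refused"
		fmt.Println("webhook unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("webhook endpoint is accepting connections")
}

The status manager keeps retrying, so these entries clear on their own once the webhook pod's containers come up.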
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.138701 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.138710 4769 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.138719 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.138728 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.138737 4769 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.138747 4769 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.138755 4769 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.138763 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.138771 4769 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.138779 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.138859 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.138869 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.138876 4769 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.138885 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.138894 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.138901 4769 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.138909 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.138917 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.138925 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.150409 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.150685 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.150884 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"]
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.151023 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.158645 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.171077 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 22 13:44:00 crc kubenswrapper[4769]: W0122 13:44:00.183948 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-73082e32f399f2751384fafa16bc563373007b8c6310ad5597de02858cea9459 WatchSource:0}: Error finding container 73082e32f399f2751384fafa16bc563373007b8c6310ad5597de02858cea9459: Status 404 returned error can't find the container with id 73082e32f399f2751384fafa16bc563373007b8c6310ad5597de02858cea9459
Jan 22 13:44:00 crc kubenswrapper[4769]: W0122 13:44:00.204892 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-4bbd424fca5b4902629d61b3c58894a6957091689563e5f0b63f6bfd625de7c1 WatchSource:0}: Error finding container 4bbd424fca5b4902629d61b3c58894a6957091689563e5f0b63f6bfd625de7c1: Status 404 returned error can't find the container with id 4bbd424fca5b4902629d61b3c58894a6957091689563e5f0b63f6bfd625de7c1
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.213856 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-x582x"]
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.214395 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-x582x"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.215162 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-hwhw7"]
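Two things happen above: the "SyncLoop ADD" entries show freshly re-listed pods arriving from the API source, each followed by a util.go:30 note that no sandbox exists and a new one must be started, and the W-level manager.go:1169 entries show cAdvisor's cgroup watch firing before the just-created crio-... containers are registered, so the id lookup 404s and the event is dropped. That is a benign create-time race; its usual handling has the shape below, which is illustrative only (lookup and requeue are assumed helpers, not kubelet code):

// Shape of tolerating a create-time watch race: an event can name a
// container that is not registered yet. A real consumer requeues or
// simply drops the event and lets the next housekeeping pass catch up.
func handleWatchEvent(id string, lookup func(string) (bool, error), requeue func(string)) error {
	found, err := lookup(id)
	if err != nil {
		return err // a real failure, surface it
	}
	if !found {
		requeue(id) // registration may simply not have happened yet
	}
	return nil
}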
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.215500 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.219306 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.219372 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.219321 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.219449 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.219572 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.223153 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.223337 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.223448 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.223363 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5e43a9-5dd9-470e-a3e1-65be2c0003c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 13:43:58.922221 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 13:43:58.922428 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 13:43:58.923819 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3279922669/tls.crt::/tmp/serving-cert-3279922669/tls.key\\\\\\\"\\\\nI0122 13:43:59.114141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 13:43:59.115832 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 13:43:59.115850 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 13:43:59.115871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 13:43:59.115876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 13:43:59.120589 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 13:43:59.120619 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120629 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 13:43:59.120651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 13:43:59.120656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 13:43:59.120662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 13:43:59.120676 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 13:43:59.123476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.239358 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f0af8746-c9f0-48e6-8a60-02fed286b419-mcd-auth-proxy-config\") pod \"machine-config-daemon-hwhw7\" (UID: \"f0af8746-c9f0-48e6-8a60-02fed286b419\") " pod="openshift-machine-config-operator/machine-config-daemon-hwhw7"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.239411 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2c8w6\" (UniqueName: \"kubernetes.io/projected/34fa095e-fc7f-431c-8421-1178e63721ac-kube-api-access-2c8w6\") pod \"node-resolver-x582x\" (UID: \"34fa095e-fc7f-431c-8421-1178e63721ac\") " pod="openshift-dns/node-resolver-x582x"
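The reflector.go:368 entries mark client-go informer caches completing their initial List+Watch for each ConfigMap and Secret that the incoming pods reference; the kubelet resolves configMap, secret, and projected volumes (the VerifyControllerAttachedVolume entries nearby) through these per-object caches. A compressed sketch of the same machinery as an ordinary client-go consumer follows; the namespace and resync period are chosen arbitrarily for illustration.

package main

import (
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

// Sketch of the reflector/informer machinery behind the
// "Caches populated for *v1.ConfigMap" records: a reflector lists and
// watches one type and fills a local store that consumers wait on.
func watchConfigMaps(cfg *rest.Config, stop <-chan struct{}) {
	clientset := kubernetes.NewForConfigOrDie(cfg)
	factory := informers.NewSharedInformerFactoryWithOptions(
		clientset, 30*time.Second, informers.WithNamespace("openshift-dns"))
	inf := factory.Core().V1().ConfigMaps().Informer()
	inf.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			if cm, ok := obj.(*v1.ConfigMap); ok {
				fmt.Printf("cache add: %s/%s\n", cm.Namespace, cm.Name)
			}
		},
	})
	factory.Start(stop)
	// Corresponds to the "Caches populated" log lines above.
	cache.WaitForCacheSync(stop, inf.HasSynced)
}

Keeping one shared cache per referenced object is what lets the kubelet restart dozens of pods here without hammering the API server with per-volume GETs.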
\"machine-config-daemon-hwhw7\" (UID: \"f0af8746-c9f0-48e6-8a60-02fed286b419\") " pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.239455 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhgc5\" (UniqueName: \"kubernetes.io/projected/f0af8746-c9f0-48e6-8a60-02fed286b419-kube-api-access-bhgc5\") pod \"machine-config-daemon-hwhw7\" (UID: \"f0af8746-c9f0-48e6-8a60-02fed286b419\") " pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.239476 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f0af8746-c9f0-48e6-8a60-02fed286b419-rootfs\") pod \"machine-config-daemon-hwhw7\" (UID: \"f0af8746-c9f0-48e6-8a60-02fed286b419\") " pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.239505 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/34fa095e-fc7f-431c-8421-1178e63721ac-hosts-file\") pod \"node-resolver-x582x\" (UID: \"34fa095e-fc7f-431c-8421-1178e63721ac\") " pod="openshift-dns/node-resolver-x582x" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.279223 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.304123 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.315995 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.325100 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.340662 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2c8w6\" (UniqueName: \"kubernetes.io/projected/34fa095e-fc7f-431c-8421-1178e63721ac-kube-api-access-2c8w6\") pod \"node-resolver-x582x\" (UID: \"34fa095e-fc7f-431c-8421-1178e63721ac\") " pod="openshift-dns/node-resolver-x582x" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.340699 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f0af8746-c9f0-48e6-8a60-02fed286b419-proxy-tls\") pod \"machine-config-daemon-hwhw7\" (UID: \"f0af8746-c9f0-48e6-8a60-02fed286b419\") " pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.340719 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f0af8746-c9f0-48e6-8a60-02fed286b419-mcd-auth-proxy-config\") pod \"machine-config-daemon-hwhw7\" (UID: \"f0af8746-c9f0-48e6-8a60-02fed286b419\") " pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.340749 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhgc5\" (UniqueName: \"kubernetes.io/projected/f0af8746-c9f0-48e6-8a60-02fed286b419-kube-api-access-bhgc5\") pod \"machine-config-daemon-hwhw7\" (UID: \"f0af8746-c9f0-48e6-8a60-02fed286b419\") " pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.340769 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f0af8746-c9f0-48e6-8a60-02fed286b419-rootfs\") pod \"machine-config-daemon-hwhw7\" (UID: \"f0af8746-c9f0-48e6-8a60-02fed286b419\") " pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.340827 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/34fa095e-fc7f-431c-8421-1178e63721ac-hosts-file\") pod \"node-resolver-x582x\" (UID: \"34fa095e-fc7f-431c-8421-1178e63721ac\") " pod="openshift-dns/node-resolver-x582x" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.340920 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" 
(UniqueName: \"kubernetes.io/host-path/34fa095e-fc7f-431c-8421-1178e63721ac-hosts-file\") pod \"node-resolver-x582x\" (UID: \"34fa095e-fc7f-431c-8421-1178e63721ac\") " pod="openshift-dns/node-resolver-x582x" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.340960 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f0af8746-c9f0-48e6-8a60-02fed286b419-rootfs\") pod \"machine-config-daemon-hwhw7\" (UID: \"f0af8746-c9f0-48e6-8a60-02fed286b419\") " pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.341545 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f0af8746-c9f0-48e6-8a60-02fed286b419-mcd-auth-proxy-config\") pod \"machine-config-daemon-hwhw7\" (UID: \"f0af8746-c9f0-48e6-8a60-02fed286b419\") " pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.344716 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f0af8746-c9f0-48e6-8a60-02fed286b419-proxy-tls\") pod \"machine-config-daemon-hwhw7\" (UID: \"f0af8746-c9f0-48e6-8a60-02fed286b419\") " pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.349091 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8acbc9bd-4838-4547-80a1-a1e16c37bd1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5acd2ae7d3fbf9440fb0095eefddc52c84728ec6d0e4d9821229cb6aa12573d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5c6216f99e720568a7dd7656244ade4b4fd44412e714b6135242a19f78761f\\\",\\\"image\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d261d3f5d679cb5cf8f94fe77bb969fd91ca614da19b149ce6eb02a380114d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://752fd19e1b4a5fd4026faf94d00dfd9322825c19436cccb582e074f4c203bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04ae80bb757e0b6ea9a3759ec49af25cff9106966608aa9750a0026e6053d15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a56
46fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.359500 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhgc5\" (UniqueName: \"kubernetes.io/projected/f0af8746-c9f0-48e6-8a60-02fed286b419-kube-api-access-bhgc5\") pod \"machine-config-daemon-hwhw7\" (UID: \"f0af8746-c9f0-48e6-8a60-02fed286b419\") " pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.371730 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-2c8w6\" (UniqueName: \"kubernetes.io/projected/34fa095e-fc7f-431c-8421-1178e63721ac-kube-api-access-2c8w6\") pod \"node-resolver-x582x\" (UID: \"34fa095e-fc7f-431c-8421-1178e63721ac\") " pod="openshift-dns/node-resolver-x582x" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.379964 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2dae2b-fadf-49f8-8ff4-4772ad128dca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45689b9787ddf9d1ee0769e761b7b24a3804e1a92e4bc522b5fc4ef4dc145ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83046a8dcab554309ccb822f7ea451bd90e9bbe0037f2b51061cb2943122ac47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad0f1c98d4f80ff4fb3c0a0b65518ea6fdf45f4e428661eb8c34419d0774da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0a1825528a237f05c91fcea30668c1b12f4b58a604912844e672424907cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.396905 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.418400 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.437821 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8acbc9bd-4838-4547-80a1-a1e16c37bd1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5acd2ae7d3fbf9440fb0095eefddc52c84728ec6d0e4d9821229cb6aa12573d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5c6216f99e720568a7dd7656244ade4b4fd44412e714b6135242a19f78761f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026
-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d261d3f5d679cb5cf8f94fe77bb969fd91ca614da19b149ce6eb02a380114d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://752fd19e1b4a5fd4026faf94d00dfd9322825c19436cccb582e074f4c203bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04ae80bb757e0b6ea9a3759ec49af25cff9106966608aa9750a0026e6053d15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c
2c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.454371 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2dae2b-fadf-49f8-8ff4-4772ad128dca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45689b9787ddf9d1ee0769e761b7b24a3804e1a92e4bc522b5fc4ef4dc145ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83046a8dcab554309ccb822f7ea451bd90e9bbe0037f2b51061cb2943122ac47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad0f1c98d4f80ff4fb3c0a0b65518ea6fdf45f4e428661eb8c34419d0774da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0a1825528a237f05c91fcea30668c1b12f4b58a604912844e672424907cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.467423 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.478729 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.487767 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0af8746-c9f0-48e6-8a60-02fed286b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hwhw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.499231 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5e43a9-5dd9-470e-a3e1-65be2c0003c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22
T13:43:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 13:43:58.922221 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 13:43:58.922428 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 13:43:58.923819 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3279922669/tls.crt::/tmp/serving-cert-3279922669/tls.key\\\\\\\"\\\\nI0122 13:43:59.114141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 13:43:59.115832 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 13:43:59.115850 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 13:43:59.115871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 13:43:59.115876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 13:43:59.120589 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 13:43:59.120619 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120629 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 13:43:59.120651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 13:43:59.120656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 13:43:59.120662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 13:43:59.120676 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 13:43:59.123476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.509297 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.524274 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.536030 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.542288 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.542363 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.542397 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 22 13:44:00 crc kubenswrapper[4769]: E0122 13:44:00.542481 4769 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 22 13:44:00 crc kubenswrapper[4769]: E0122 13:44:00.542492 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:44:01.542449225 +0000 UTC m=+20.953559154 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 13:44:00 crc kubenswrapper[4769]: E0122 13:44:00.542528 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 13:44:01.542514888 +0000 UTC m=+20.953624817 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 22 13:44:00 crc kubenswrapper[4769]: E0122 13:44:00.542630 4769 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 22 13:44:00 crc kubenswrapper[4769]: E0122 13:44:00.542737 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 13:44:01.542716053 +0000 UTC m=+20.953826052 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
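Note on the nestedpendingoperations.go entries above: each failed volume operation is gated behind a delay before the next attempt ("No retries permitted until ... durationBeforeRetry 1s"), and the delay grows on repeated failure. A minimal Go sketch of that retry pattern follows; mountVolume here is a hypothetical stand-in, not kubelet's real volume manager, and the doubling growth and cap are assumptions for illustration.

package main

import (
	"errors"
	"fmt"
	"time"
)

// errNotRegistered mimics the failure mode in the log: the volume's backing
// API object has not yet been registered with the kubelet's local cache.
var errNotRegistered = errors.New(`object "openshift-network-console"/"networking-console-plugin" not registered`)

// mountVolume is a hypothetical stand-in for MountVolume.SetUp; it succeeds
// once the backing object shows up (here, after two failed attempts).
func mountVolume(attempt int) error {
	if attempt < 2 {
		return errNotRegistered
	}
	return nil
}

func main() {
	delay := time.Second // initial durationBeforeRetry, as seen in the log
	for attempt := 0; ; attempt++ {
		if err := mountVolume(attempt); err != nil {
			fmt.Printf("failed. No retries permitted until %s (durationBeforeRetry %s). Error: %v\n",
				time.Now().Add(delay).Format(time.RFC3339), delay, err)
			time.Sleep(delay)
			delay *= 2 // assumed exponential growth; kubelet caps this backoff
			continue
		}
		fmt.Println("MountVolume.SetUp succeeded")
		return
	}
}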
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.546154 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.550769 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-x582x"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.560618 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x582x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34fa095e-fc7f-431c-8421-1178e63721ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c8w6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x582x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 22 13:44:00 crc kubenswrapper[4769]: W0122 13:44:00.565264 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34fa095e_fc7f_431c_8421_1178e63721ac.slice/crio-6b1a0696b8bf09d52e09fe15609feddd7054598d2bc82b07f1836e6309422f36 WatchSource:0}: Error finding container 6b1a0696b8bf09d52e09fe15609feddd7054598d2bc82b07f1836e6309422f36: Status 404 returned error can't find the container with id 6b1a0696b8bf09d52e09fe15609feddd7054598d2bc82b07f1836e6309422f36
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.567580 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7"
Jan 22 13:44:00 crc kubenswrapper[4769]: W0122 13:44:00.583585 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf0af8746_c9f0_48e6_8a60_02fed286b419.slice/crio-cab2b1c1881b6a2660c7f4a16de8d4376d73a16ba7d5480a5446a254b2df9c51 WatchSource:0}: Error finding container cab2b1c1881b6a2660c7f4a16de8d4376d73a16ba7d5480a5446a254b2df9c51: Status 404 returned error can't find the container with id cab2b1c1881b6a2660c7f4a16de8d4376d73a16ba7d5480a5446a254b2df9c51
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.642829 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.642924 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 22 13:44:00 crc kubenswrapper[4769]: E0122 13:44:00.643064 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 22 13:44:00 crc kubenswrapper[4769]: E0122 13:44:00.643106 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 22 13:44:00 crc kubenswrapper[4769]: E0122 13:44:00.643122 4769 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 22 13:44:00 crc kubenswrapper[4769]: E0122 13:44:00.643136 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
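Note on the projected.go entries above: a kube-api-access projected volume is assembled from several sources at once (here the kube-root-ca.crt and openshift-service-ca.crt ConfigMaps), and setup fails as a whole if any source is not yet in the kubelet's local object cache. A rough Go sketch of that all-or-nothing assembly follows, with a plain map standing in for the cache; this is a hypothetical illustration, not the real kubelet implementation.

package main

import "fmt"

// cache stands in for the kubelet's per-namespace object cache; in the log,
// neither ConfigMap is registered yet, so volume preparation fails entirely.
var cache = map[string]string{}

// prepareProjected collects every source or fails with the aggregated error
// list, mirroring the "[object ... not registered, ...]" messages above.
func prepareProjected(ns string, sources []string) (map[string]string, error) {
	out := map[string]string{}
	var errs []string
	for _, name := range sources {
		v, ok := cache[ns+"/"+name]
		if !ok {
			errs = append(errs, fmt.Sprintf("object %q/%q not registered", ns, name))
			continue
		}
		out[name] = v
	}
	if len(errs) > 0 {
		return nil, fmt.Errorf("%v", errs)
	}
	return out, nil
}

func main() {
	_, err := prepareProjected("openshift-network-diagnostics",
		[]string{"kube-root-ca.crt", "openshift-service-ca.crt"})
	fmt.Println("Error preparing data for projected volume:", err)
}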
Jan 22 13:44:00 crc kubenswrapper[4769]: E0122 13:44:00.643158 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 22 13:44:00 crc kubenswrapper[4769]: E0122 13:44:00.643180 4769 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 22 13:44:00 crc kubenswrapper[4769]: E0122 13:44:00.643195 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 13:44:01.643175862 +0000 UTC m=+21.054285811 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 22 13:44:00 crc kubenswrapper[4769]: E0122 13:44:00.643263 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-22 13:44:01.643245933 +0000 UTC m=+21.054355872 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.659891 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-fclh4"]
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.660169 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-d9wdl"]
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.660692 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-d9wdl"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.661029 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-fclh4"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.663036 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.663148 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.663240 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.664012 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.664266 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.664849 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.664916 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.685045 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.703575 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d9wdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.721663 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8acbc9bd-4838-4547-80a1-a1e16c37bd1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5acd2ae7d3fbf9440fb0095eefddc52c84728ec6d0e4d9821229cb6aa12573d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5c6216f99e720568a7dd7656244ade4b4fd44412e714b6135242a19f78761f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d261d3f5d679cb5cf8f94fe77bb969fd91ca614da19b149ce6eb02a380114d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://752fd19e1b4a5fd4026faf94d00dfd9322825c1
9436cccb582e074f4c203bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04ae80bb757e0b6ea9a3759ec49af25cff9106966608aa9750a0026e6053d15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.735761 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2dae2b-fadf-49f8-8ff4-4772ad128dca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45689b9787ddf9d1ee0769e761b7b24a3804e1a92e4bc522b5fc4ef4dc145ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83046a8dcab554309ccb822f7ea451bd90e9bbe0037f2b51061cb2943122ac47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335
e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad0f1c98d4f80ff4fb3c0a0b65518ea6fdf45f4e428661eb8c34419d0774da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0a1825528a237f05c91fcea30668c1b12f4b58a604912844e672424907cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:00Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.739709 4769 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 22 13:44:00 crc kubenswrapper[4769]: W0122 13:44:00.740119 4769 reflector.go:484] object-"openshift-multus"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 22 13:44:00 crc kubenswrapper[4769]: W0122 13:44:00.740153 
4769 reflector.go:484] object-"openshift-network-operator"/"iptables-alerter-script": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"iptables-alerter-script": Unexpected watch close - watch lasted less than a second and no items received
Jan 22 13:44:00 crc kubenswrapper[4769]: W0122 13:44:00.740141 4769 reflector.go:484] object-"openshift-multus"/"cni-copy-resources": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"cni-copy-resources": Unexpected watch close - watch lasted less than a second and no items received
Jan 22 13:44:00 crc kubenswrapper[4769]: W0122 13:44:00.740458 4769 reflector.go:484] object-"openshift-network-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received
Jan 22 13:44:00 crc kubenswrapper[4769]: W0122 13:44:00.740841 4769 reflector.go:484] object-"openshift-dns"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received
Jan 22 13:44:00 crc kubenswrapper[4769]: W0122 13:44:00.740899 4769 reflector.go:484] object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": watch of *v1.Secret ended with: very short watch: object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": Unexpected watch close - watch lasted less than a second and no items received
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.741016 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-58b4c7f79c-55gtf/status\": read tcp 38.102.83.50:40852->38.102.83.50:6443: use of closed network connection"
Jan 22 13:44:00 crc kubenswrapper[4769]: W0122 13:44:00.741331 4769 reflector.go:484] object-"openshift-network-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received
Jan 22 13:44:00 crc kubenswrapper[4769]: W0122 13:44:00.741362 4769 reflector.go:484] object-"openshift-machine-config-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-machine-config-operator"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received
Jan 22 13:44:00 crc kubenswrapper[4769]: W0122 13:44:00.741383 4769 reflector.go:484] object-"openshift-dns"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received
Jan 22 13:44:00 crc kubenswrapper[4769]: W0122 13:44:00.741398 4769 reflector.go:484] object-"openshift-multus"/"multus-daemon-config": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"multus-daemon-config": Unexpected watch close - watch lasted less than a second and no items received
Jan 22 13:44:00 crc kubenswrapper[4769]: W0122 13:44:00.741419 4769 reflector.go:484] object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": watch of *v1.Secret ended with: very short watch: object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": Unexpected watch close - watch lasted less than a second and no items received
Jan 22 13:44:00 crc kubenswrapper[4769]: W0122 13:44:00.741585 4769 reflector.go:484] object-"openshift-multus"/"default-dockercfg-2q5b6": watch of *v1.Secret ended with: very short watch: object-"openshift-multus"/"default-dockercfg-2q5b6": Unexpected watch close - watch lasted less than a second and no items received
Jan 22 13:44:00 crc kubenswrapper[4769]: W0122 13:44:00.741743 4769 reflector.go:484] object-"openshift-network-node-identity"/"env-overrides": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"env-overrides": Unexpected watch close - watch lasted less than a second and no items received
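Note on the burst of reflector.go:484 warnings above: the transport.go entry just before it ("Certificate rotation detected, shutting down client connections to start using new credentials") closes every open API connection, so each per-object reflector sees its watch end almost immediately; the reflectors then re-list and re-watch with the new client. A small Go sketch of that generic watch-retry loop follows; watchOnce is a hypothetical stand-in, not client-go's actual reflector.

package main

import (
	"errors"
	"fmt"
	"time"
)

var errWatchClosed = errors.New("unexpected watch close - watch lasted less than a second and no items received")

// watchOnce stands in for one reflector watch attempt; the first attempt is
// cut short, as when client connections are closed for certificate rotation.
func watchOnce(attempt int) error {
	if attempt == 0 {
		return errWatchClosed
	}
	return nil
}

func main() {
	for attempt := 0; ; attempt++ {
		if err := watchOnce(attempt); err != nil {
			fmt.Printf("watch of *v1.ConfigMap ended with: very short watch: %v\n", err)
			time.Sleep(100 * time.Millisecond) // brief backoff, then re-list and re-watch
			continue
		}
		fmt.Println("Caches populated for *v1.ConfigMap") // as in the reflector.go:368 entries earlier
		return
	}
}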
object-"openshift-network-node-identity"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 22 13:44:00 crc kubenswrapper[4769]: W0122 13:44:00.741836 4769 reflector.go:484] object-"openshift-machine-config-operator"/"proxy-tls": watch of *v1.Secret ended with: very short watch: object-"openshift-machine-config-operator"/"proxy-tls": Unexpected watch close - watch lasted less than a second and no items received Jan 22 13:44:00 crc kubenswrapper[4769]: W0122 13:44:00.741835 4769 reflector.go:484] object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": watch of *v1.Secret ended with: very short watch: object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": Unexpected watch close - watch lasted less than a second and no items received Jan 22 13:44:00 crc kubenswrapper[4769]: W0122 13:44:00.741874 4769 reflector.go:484] object-"openshift-network-operator"/"metrics-tls": watch of *v1.Secret ended with: very short watch: object-"openshift-network-operator"/"metrics-tls": Unexpected watch close - watch lasted less than a second and no items received Jan 22 13:44:00 crc kubenswrapper[4769]: W0122 13:44:00.741897 4769 reflector.go:484] object-"openshift-network-node-identity"/"network-node-identity-cert": watch of *v1.Secret ended with: very short watch: object-"openshift-network-node-identity"/"network-node-identity-cert": Unexpected watch close - watch lasted less than a second and no items received Jan 22 13:44:00 crc kubenswrapper[4769]: W0122 13:44:00.741910 4769 reflector.go:484] object-"openshift-machine-config-operator"/"kube-rbac-proxy": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-machine-config-operator"/"kube-rbac-proxy": Unexpected watch close - watch lasted less than a second and no items received Jan 22 13:44:00 crc kubenswrapper[4769]: W0122 13:44:00.741928 4769 reflector.go:484] object-"openshift-multus"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 22 13:44:00 crc kubenswrapper[4769]: W0122 13:44:00.741954 4769 reflector.go:484] object-"openshift-network-node-identity"/"ovnkube-identity-cm": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"ovnkube-identity-cm": Unexpected watch close - watch lasted less than a second and no items received Jan 22 13:44:00 crc kubenswrapper[4769]: W0122 13:44:00.741961 4769 reflector.go:484] object-"openshift-network-node-identity"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 22 13:44:00 crc kubenswrapper[4769]: W0122 13:44:00.741978 4769 reflector.go:484] object-"openshift-machine-config-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-machine-config-operator"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 22 13:44:00 crc kubenswrapper[4769]: W0122 13:44:00.741970 4769 reflector.go:484] object-"openshift-multus"/"default-cni-sysctl-allowlist": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"default-cni-sysctl-allowlist": Unexpected watch close - watch lasted less than a second and no items received Jan 22 13:44:00 crc 
kubenswrapper[4769]: I0122 13:44:00.743302 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/d4186e93-df8a-49d3-9068-c8b8acd05baa-host-run-k8s-cni-cncf-io\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.743334 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76-cnibin\") pod \"multus-additional-cni-plugins-d9wdl\" (UID: \"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\") " pod="openshift-multus/multus-additional-cni-plugins-d9wdl" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.743353 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d4186e93-df8a-49d3-9068-c8b8acd05baa-system-cni-dir\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.743369 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76-os-release\") pod \"multus-additional-cni-plugins-d9wdl\" (UID: \"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\") " pod="openshift-multus/multus-additional-cni-plugins-d9wdl" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.743385 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/d4186e93-df8a-49d3-9068-c8b8acd05baa-multus-daemon-config\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.743402 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hprv8\" (UniqueName: \"kubernetes.io/projected/cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76-kube-api-access-hprv8\") pod \"multus-additional-cni-plugins-d9wdl\" (UID: \"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\") " pod="openshift-multus/multus-additional-cni-plugins-d9wdl" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.743418 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/d4186e93-df8a-49d3-9068-c8b8acd05baa-hostroot\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.743444 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/d4186e93-df8a-49d3-9068-c8b8acd05baa-host-var-lib-cni-multus\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.743457 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76-tuning-conf-dir\") pod \"multus-additional-cni-plugins-d9wdl\" (UID: 
\"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\") " pod="openshift-multus/multus-additional-cni-plugins-d9wdl" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.743472 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-d9wdl\" (UID: \"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\") " pod="openshift-multus/multus-additional-cni-plugins-d9wdl" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.743486 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d4186e93-df8a-49d3-9068-c8b8acd05baa-cnibin\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.743500 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kk8w9\" (UniqueName: \"kubernetes.io/projected/d4186e93-df8a-49d3-9068-c8b8acd05baa-kube-api-access-kk8w9\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.743514 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/d4186e93-df8a-49d3-9068-c8b8acd05baa-host-run-multus-certs\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.743530 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d4186e93-df8a-49d3-9068-c8b8acd05baa-host-var-lib-cni-bin\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.743546 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d4186e93-df8a-49d3-9068-c8b8acd05baa-host-var-lib-kubelet\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.743559 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d4186e93-df8a-49d3-9068-c8b8acd05baa-multus-cni-dir\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.743573 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d4186e93-df8a-49d3-9068-c8b8acd05baa-os-release\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.743589 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76-system-cni-dir\") pod 
\"multus-additional-cni-plugins-d9wdl\" (UID: \"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\") " pod="openshift-multus/multus-additional-cni-plugins-d9wdl" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.743603 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76-cni-binary-copy\") pod \"multus-additional-cni-plugins-d9wdl\" (UID: \"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\") " pod="openshift-multus/multus-additional-cni-plugins-d9wdl" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.743623 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d4186e93-df8a-49d3-9068-c8b8acd05baa-cni-binary-copy\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.743636 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d4186e93-df8a-49d3-9068-c8b8acd05baa-host-run-netns\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.743662 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d4186e93-df8a-49d3-9068-c8b8acd05baa-etc-kubernetes\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.743679 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d4186e93-df8a-49d3-9068-c8b8acd05baa-multus-conf-dir\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.743698 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/d4186e93-df8a-49d3-9068-c8b8acd05baa-multus-socket-dir-parent\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.765989 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:00Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.790947 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5e43a9-5dd9-470e-a3e1-65be2c0003c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22
T13:43:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 13:43:58.922221 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 13:43:58.922428 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 13:43:58.923819 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3279922669/tls.crt::/tmp/serving-cert-3279922669/tls.key\\\\\\\"\\\\nI0122 13:43:59.114141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 13:43:59.115832 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 13:43:59.115850 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 13:43:59.115871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 13:43:59.115876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 13:43:59.120589 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 13:43:59.120619 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120629 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 13:43:59.120651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 13:43:59.120656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 13:43:59.120662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 13:43:59.120676 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 13:43:59.123476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:00Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.807103 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:00Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.822692 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:00Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.835444 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 04:17:21.611269991 +0000 UTC Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.837041 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0af8746-c9f0-48e6-8a60-02fed286b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hwhw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:00Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.845161 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/d4186e93-df8a-49d3-9068-c8b8acd05baa-multus-socket-dir-parent\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.845564 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/d4186e93-df8a-49d3-9068-c8b8acd05baa-host-run-k8s-cni-cncf-io\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.845678 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/d4186e93-df8a-49d3-9068-c8b8acd05baa-host-run-k8s-cni-cncf-io\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " 
pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.845625 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/d4186e93-df8a-49d3-9068-c8b8acd05baa-multus-socket-dir-parent\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.845885 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76-cnibin\") pod \"multus-additional-cni-plugins-d9wdl\" (UID: \"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\") " pod="openshift-multus/multus-additional-cni-plugins-d9wdl" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.845708 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76-cnibin\") pod \"multus-additional-cni-plugins-d9wdl\" (UID: \"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\") " pod="openshift-multus/multus-additional-cni-plugins-d9wdl" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.846104 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d4186e93-df8a-49d3-9068-c8b8acd05baa-system-cni-dir\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.846428 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76-os-release\") pod \"multus-additional-cni-plugins-d9wdl\" (UID: \"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\") " pod="openshift-multus/multus-additional-cni-plugins-d9wdl" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.846543 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hprv8\" (UniqueName: \"kubernetes.io/projected/cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76-kube-api-access-hprv8\") pod \"multus-additional-cni-plugins-d9wdl\" (UID: \"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\") " pod="openshift-multus/multus-additional-cni-plugins-d9wdl" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.846667 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76-os-release\") pod \"multus-additional-cni-plugins-d9wdl\" (UID: \"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\") " pod="openshift-multus/multus-additional-cni-plugins-d9wdl" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.846688 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/d4186e93-df8a-49d3-9068-c8b8acd05baa-hostroot\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.846447 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d4186e93-df8a-49d3-9068-c8b8acd05baa-system-cni-dir\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.846771 4769 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/d4186e93-df8a-49d3-9068-c8b8acd05baa-multus-daemon-config\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.846854 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/d4186e93-df8a-49d3-9068-c8b8acd05baa-host-var-lib-cni-multus\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.846897 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76-tuning-conf-dir\") pod \"multus-additional-cni-plugins-d9wdl\" (UID: \"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\") " pod="openshift-multus/multus-additional-cni-plugins-d9wdl" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.846933 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-d9wdl\" (UID: \"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\") " pod="openshift-multus/multus-additional-cni-plugins-d9wdl" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.846960 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d4186e93-df8a-49d3-9068-c8b8acd05baa-cnibin\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.846984 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/d4186e93-df8a-49d3-9068-c8b8acd05baa-host-var-lib-cni-multus\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.846997 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kk8w9\" (UniqueName: \"kubernetes.io/projected/d4186e93-df8a-49d3-9068-c8b8acd05baa-kube-api-access-kk8w9\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.847052 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d4186e93-df8a-49d3-9068-c8b8acd05baa-host-var-lib-cni-bin\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.847078 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d4186e93-df8a-49d3-9068-c8b8acd05baa-host-var-lib-kubelet\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.847097 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" 
(UniqueName: \"kubernetes.io/host-path/d4186e93-df8a-49d3-9068-c8b8acd05baa-host-run-multus-certs\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.847118 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d4186e93-df8a-49d3-9068-c8b8acd05baa-multus-cni-dir\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.847141 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d4186e93-df8a-49d3-9068-c8b8acd05baa-os-release\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.847176 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76-system-cni-dir\") pod \"multus-additional-cni-plugins-d9wdl\" (UID: \"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\") " pod="openshift-multus/multus-additional-cni-plugins-d9wdl" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.847195 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76-cni-binary-copy\") pod \"multus-additional-cni-plugins-d9wdl\" (UID: \"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\") " pod="openshift-multus/multus-additional-cni-plugins-d9wdl" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.847215 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d4186e93-df8a-49d3-9068-c8b8acd05baa-etc-kubernetes\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.847230 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d4186e93-df8a-49d3-9068-c8b8acd05baa-cni-binary-copy\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.847250 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d4186e93-df8a-49d3-9068-c8b8acd05baa-host-run-netns\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.847266 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d4186e93-df8a-49d3-9068-c8b8acd05baa-multus-conf-dir\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.847290 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d4186e93-df8a-49d3-9068-c8b8acd05baa-cnibin\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 
22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.847314 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d4186e93-df8a-49d3-9068-c8b8acd05baa-multus-conf-dir\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.847335 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d4186e93-df8a-49d3-9068-c8b8acd05baa-os-release\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.847339 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76-system-cni-dir\") pod \"multus-additional-cni-plugins-d9wdl\" (UID: \"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\") " pod="openshift-multus/multus-additional-cni-plugins-d9wdl" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.847398 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d4186e93-df8a-49d3-9068-c8b8acd05baa-host-var-lib-cni-bin\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.847439 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d4186e93-df8a-49d3-9068-c8b8acd05baa-host-var-lib-kubelet\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.847469 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/d4186e93-df8a-49d3-9068-c8b8acd05baa-host-run-multus-certs\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.847520 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d4186e93-df8a-49d3-9068-c8b8acd05baa-multus-cni-dir\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.847522 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/d4186e93-df8a-49d3-9068-c8b8acd05baa-multus-daemon-config\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.847733 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/d4186e93-df8a-49d3-9068-c8b8acd05baa-hostroot\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.847819 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d4186e93-df8a-49d3-9068-c8b8acd05baa-host-run-netns\") pod \"multus-fclh4\" (UID: 
\"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.847899 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76-tuning-conf-dir\") pod \"multus-additional-cni-plugins-d9wdl\" (UID: \"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\") " pod="openshift-multus/multus-additional-cni-plugins-d9wdl" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.848116 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-d9wdl\" (UID: \"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\") " pod="openshift-multus/multus-additional-cni-plugins-d9wdl" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.848044 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76-cni-binary-copy\") pod \"multus-additional-cni-plugins-d9wdl\" (UID: \"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\") " pod="openshift-multus/multus-additional-cni-plugins-d9wdl" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.847889 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d4186e93-df8a-49d3-9068-c8b8acd05baa-etc-kubernetes\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.848138 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d4186e93-df8a-49d3-9068-c8b8acd05baa-cni-binary-copy\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.849783 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:00Z is after 2025-08-24T17:21:41Z"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.863779 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x582x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34fa095e-fc7f-431c-8421-1178e63721ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c8w6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x582x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:00Z is after 2025-08-24T17:21:41Z"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.891626 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fclh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4186e93-df8a-49d3-9068-c8b8acd05baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk8w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fclh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:00Z is after 2025-08-24T17:21:41Z"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.892180 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.892690 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.893493 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.894174 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.894743 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.896070 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.896641 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.897527 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.898176 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.900430 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.900988 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.902125 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.903145 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.903696 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.904695 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.905253 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.906271 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.906679 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.907267 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.907440 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kk8w9\" (UniqueName: \"kubernetes.io/projected/d4186e93-df8a-49d3-9068-c8b8acd05baa-kube-api-access-kk8w9\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.907642 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hprv8\" (UniqueName: \"kubernetes.io/projected/cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76-kube-api-access-hprv8\") pod \"multus-additional-cni-plugins-d9wdl\" (UID: \"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\") " pod="openshift-multus/multus-additional-cni-plugins-d9wdl"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.908338 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.908843 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.910099 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.910524 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.911543 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.912077 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.912650 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.913915 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.914490 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.914898 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0af8746-c9f0-48e6-8a60-02fed286b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hwhw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:00Z is after 2025-08-24T17:21:41Z"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.915415 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.919690 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.920374 4769 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.920482 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.922873 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.923676 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.924660 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.926168 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.926864 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.927836 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.928546 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.929628 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.930275 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.931423 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.931998 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:00Z is after 2025-08-24T17:21:41Z"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.932093 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.933092 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.933546 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.934468 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.935043 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.936160 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.936652 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.937574 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.938204 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.939004 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.939924 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.940547 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.947909 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x582x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34fa095e-fc7f-431c-8421-1178e63721ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c8w6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x582x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:00Z is after 2025-08-24T17:21:41Z"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.971387 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fclh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4186e93-df8a-49d3-9068-c8b8acd05baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk8w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fclh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:00Z is after 2025-08-24T17:21:41Z"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.976531 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-d9wdl"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.984090 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-fclh4"
Jan 22 13:44:00 crc kubenswrapper[4769]: W0122 13:44:00.988898 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd0cf7bc_a4fc_4a12_aafc_28598fdd5d76.slice/crio-ba13a6b2d87e58399adfdbecb9243ba037f19d350694f214dd00579482ef1d88 WatchSource:0}: Error finding container ba13a6b2d87e58399adfdbecb9243ba037f19d350694f214dd00579482ef1d88: Status 404 returned error can't find the container with id ba13a6b2d87e58399adfdbecb9243ba037f19d350694f214dd00579482ef1d88
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.989104 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d9wdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:00Z is after 2025-08-24T17:21:41Z"
Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.004026 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"4bbd424fca5b4902629d61b3c58894a6957091689563e5f0b63f6bfd625de7c1"}
Jan 22 13:44:01 crc kubenswrapper[4769]: W0122 13:44:01.005999 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4186e93_df8a_49d3_9068_c8b8acd05baa.slice/crio-6a37281e385959c7ee151c48162eaa01b371ce4fe79f3441940766a91ad77fb8 WatchSource:0}: Error finding container 6a37281e385959c7ee151c48162eaa01b371ce4fe79f3441940766a91ad77fb8: Status 404 returned error can't find the container with id 6a37281e385959c7ee151c48162eaa01b371ce4fe79f3441940766a91ad77fb8
Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.012638 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" event={"ID":"f0af8746-c9f0-48e6-8a60-02fed286b419","Type":"ContainerStarted","Data":"4be9c299672c1ced5d45263cb73ea0d7766a3cd47bbb16996c91c24206203ffe"}
Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.012707 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" event={"ID":"f0af8746-c9f0-48e6-8a60-02fed286b419","Type":"ContainerStarted","Data":"9528976f6e0625739097546d794445c24881673bfd0df525a77bbbd61e67897d"}
Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.012723 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" event={"ID":"f0af8746-c9f0-48e6-8a60-02fed286b419","Type":"ContainerStarted","Data":"cab2b1c1881b6a2660c7f4a16de8d4376d73a16ba7d5480a5446a254b2df9c51"}
Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.015753 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-x582x" event={"ID":"34fa095e-fc7f-431c-8421-1178e63721ac","Type":"ContainerStarted","Data":"5c630331d1e07a9bd283dfb95e9324ecc7666491cb2a4383ba82f45bbcaee3ce"}
Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.015925 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-x582x" event={"ID":"34fa095e-fc7f-431c-8421-1178e63721ac","Type":"ContainerStarted","Data":"6b1a0696b8bf09d52e09fe15609feddd7054598d2bc82b07f1836e6309422f36"}
Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.019504 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"340e014090eff13efd7d84a279f298e5d933733828964a3926f5ce780010c692"}
Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.019728 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"73082e32f399f2751384fafa16bc563373007b8c6310ad5597de02858cea9459"}
Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.024849 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8acbc9bd-4838-4547-80a1-a1e16c37bd1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5acd2ae7d3fbf9440fb0095eefddc52c84728ec6d0e4d9821229cb6aa12573d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5c6216f99e720568a7dd7656244ade4b4fd44412e714b6135242a19f78761f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d261d3f5d679cb5cf8f94fe77bb969fd91ca614da19b149ce6eb02a380114d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://752fd19e1b4a5fd4026faf94d00dfd9322825c19436cccb582e074f4c203bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04ae80bb757e0b6ea9a3759ec49af25cff9106966608aa9750a0026e6053d15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:01Z is after 2025-08-24T17:21:41Z"
Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.027239 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"b962200fa0a2d9c500125f56e96656fb0feb5d500a768d148cf9ca2a0569f970"}
Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.027294 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"f407182e277e55ccfc4bdc9fdd0832e0eab3cad2df3b32a66189e599dd303f86"}
Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.027305 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"72d69196859e0025d5f218cae9fe1ef484c08e68e44d261a30b1576c71ad4753"}
Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.040748 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jrg8z"]
Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.041237 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.041765 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z"
Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.043956 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.044333 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.044525 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.045549 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.046342 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.046348 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.051230 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.051605 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2dae2b-fadf-49f8-8ff4-4772ad128dca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45689b9787ddf9d1ee0769e761b7b24a3804e1a92e4bc522b5fc4ef4dc145ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83046a8dcab554309ccb822f7ea451bd90e9bbe0037f2b51061cb2943122ac47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad0f1c98d4f80ff4fb3c0a0b65518ea6fdf45f4e428661eb8c34419d0774da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0a1825528a237f05c91fcea30668c1b12f4b58a604912844e672424907cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:01Z is after 2025-08-24T17:21:41Z"
Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.052393 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-host-cni-bin\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z"
Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.052476 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-run-ovn\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z"
Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.052514 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-host-run-ovn-kubernetes\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z"
Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.052543 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-host-cni-netd\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z"
Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.052576 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z"
Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.052689 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-etc-openvswitch\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z"
Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.052781 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9c028db8-99b9-422d-ba46-e1a2db06ce3c-env-overrides\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z"
Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.052971 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-host-run-netns\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z"
Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.053027 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-node-log\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z"
Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.053068 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-run-openvswitch\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z"
Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.053140 4769 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9c028db8-99b9-422d-ba46-e1a2db06ce3c-ovn-node-metrics-cert\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.053196 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-systemd-units\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.053300 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-host-slash\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.053338 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-host-kubelet\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.053368 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-run-systemd\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.053398 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9c028db8-99b9-422d-ba46-e1a2db06ce3c-ovnkube-script-lib\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.053425 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p276w\" (UniqueName: \"kubernetes.io/projected/9c028db8-99b9-422d-ba46-e1a2db06ce3c-kube-api-access-p276w\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.053499 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-var-lib-openvswitch\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.053541 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-log-socket\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.053555 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925"} Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.053575 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9c028db8-99b9-422d-ba46-e1a2db06ce3c-ovnkube-config\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.054464 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.057061 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" event={"ID":"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76","Type":"ContainerStarted","Data":"ba13a6b2d87e58399adfdbecb9243ba037f19d350694f214dd00579482ef1d88"} Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.112993 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:01Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.156353 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-host-cni-bin\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.156415 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-run-ovn\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.156437 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-host-run-ovn-kubernetes\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.156461 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.156493 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-host-cni-netd\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.156511 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-etc-openvswitch\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.156527 4769 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9c028db8-99b9-422d-ba46-e1a2db06ce3c-env-overrides\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.156542 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-host-run-netns\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.156557 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-node-log\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.156579 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-run-openvswitch\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.156594 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9c028db8-99b9-422d-ba46-e1a2db06ce3c-ovn-node-metrics-cert\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.156612 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-systemd-units\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.156636 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-host-slash\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.156652 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-host-kubelet\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.156667 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-run-systemd\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.156686 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/9c028db8-99b9-422d-ba46-e1a2db06ce3c-ovnkube-script-lib\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.156702 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p276w\" (UniqueName: \"kubernetes.io/projected/9c028db8-99b9-422d-ba46-e1a2db06ce3c-kube-api-access-p276w\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.156730 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9c028db8-99b9-422d-ba46-e1a2db06ce3c-ovnkube-config\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.156747 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-var-lib-openvswitch\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.156761 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-log-socket\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.156852 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-log-socket\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.157009 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-run-openvswitch\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.157156 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-host-cni-bin\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.157207 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-run-ovn\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.157245 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-host-run-ovn-kubernetes\") pod \"ovnkube-node-jrg8z\" (UID: 
\"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.157294 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.157338 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-host-cni-netd\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.157373 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-etc-openvswitch\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.157920 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-run-systemd\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.157970 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-host-slash\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.157925 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-node-log\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.157925 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-host-run-netns\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.158072 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-host-kubelet\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.158075 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-systemd-units\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 
13:44:01.158122 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-var-lib-openvswitch\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.158330 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9c028db8-99b9-422d-ba46-e1a2db06ce3c-env-overrides\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.158843 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9c028db8-99b9-422d-ba46-e1a2db06ce3c-ovnkube-config\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.159493 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9c028db8-99b9-422d-ba46-e1a2db06ce3c-ovnkube-script-lib\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.165293 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:01Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.169335 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9c028db8-99b9-422d-ba46-e1a2db06ce3c-ovn-node-metrics-cert\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.206649 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p276w\" (UniqueName: \"kubernetes.io/projected/9c028db8-99b9-422d-ba46-e1a2db06ce3c-kube-api-access-p276w\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.230258 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5e43a9-5dd9-470e-a3e1-65be2c0003c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22
T13:43:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 13:43:58.922221 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 13:43:58.922428 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 13:43:58.923819 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3279922669/tls.crt::/tmp/serving-cert-3279922669/tls.key\\\\\\\"\\\\nI0122 13:43:59.114141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 13:43:59.115832 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 13:43:59.115850 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 13:43:59.115871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 13:43:59.115876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 13:43:59.120589 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 13:43:59.120619 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120629 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 13:43:59.120651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 13:43:59.120656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 13:43:59.120662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 13:43:59.120676 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 13:43:59.123476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:01Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.277228 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:01Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.310983 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:01Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.326288 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:01Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.342045 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:01Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.358379 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5e43a9-5dd9-470e-a3e1-65be2c0003c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 13:43:58.922221 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 13:43:58.922428 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 13:43:58.923819 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3279922669/tls.crt::/tmp/serving-cert-3279922669/tls.key\\\\\\\"\\\\nI0122 13:43:59.114141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 13:43:59.115832 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 13:43:59.115850 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 13:43:59.115871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 13:43:59.115876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 13:43:59.120589 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 13:43:59.120619 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120629 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 13:43:59.120651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 13:43:59.120656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 13:43:59.120662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 13:43:59.120676 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 13:43:59.123476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:01Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.378573 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:01Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.394706 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b962200fa0a2d9c500125f56e96656fb0feb5d500a768d148cf9ca2a0569f970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f407182e277e55ccfc4bdc9fdd0832e0eab3cad2df3b32a66189e599dd303f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:01Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.417885 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0af8746-c9f0-48e6-8a60-02fed286b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be9c299672c1ced5d45263cb73ea0d7766a3cd47bbb16996c91c24206203ffe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9528976f6e0625739097546d794445c24881673bfd0df525a77bbbd61e67897d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hwhw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:01Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.438241 4769 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:01Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.442466 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.476148 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x582x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34fa095e-fc7f-431c-8421-1178e63721ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c630331d1e07a9bd283dfb95e9324ecc7666491cb2a4383ba82f45bbcaee3ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c8w6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x582x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:01Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.518103 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fclh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4186e93-df8a-49d3-9068-c8b8acd05baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk8w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fclh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:01Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.558951 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:01Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.561384 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:44:01 crc kubenswrapper[4769]: E0122 13:44:01.561571 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:44:03.561534882 +0000 UTC m=+22.972644801 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.561611 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.561676 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:44:01 crc kubenswrapper[4769]: E0122 13:44:01.561783 4769 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 13:44:01 crc kubenswrapper[4769]: E0122 13:44:01.561832 4769 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 13:44:01 crc kubenswrapper[4769]: E0122 13:44:01.561857 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 13:44:03.561849871 +0000 UTC m=+22.972959800 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 13:44:01 crc kubenswrapper[4769]: E0122 13:44:01.561874 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 13:44:03.561864681 +0000 UTC m=+22.972974610 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.599505 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d9wdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:01Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.607596 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.628671 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.662884 4769 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.662965 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:44:01 crc kubenswrapper[4769]: E0122 13:44:01.663089 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 13:44:01 crc kubenswrapper[4769]: E0122 13:44:01.663130 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 13:44:01 crc kubenswrapper[4769]: E0122 13:44:01.663135 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 13:44:01 crc kubenswrapper[4769]: E0122 13:44:01.663144 4769 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 13:44:01 crc kubenswrapper[4769]: E0122 13:44:01.663156 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 13:44:01 crc kubenswrapper[4769]: E0122 13:44:01.663170 4769 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 13:44:01 crc kubenswrapper[4769]: E0122 13:44:01.663223 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 13:44:03.663201113 +0000 UTC m=+23.074311042 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 13:44:01 crc kubenswrapper[4769]: E0122 13:44:01.663245 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-22 13:44:03.663238074 +0000 UTC m=+23.074348003 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.667638 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.707844 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.709415 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c028db8-99b9-422d-ba46-e1a2db06ce3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:01Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.727605 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.767910 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 22 13:44:01 crc 
kubenswrapper[4769]: I0122 13:44:01.787573 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.827378 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8acbc9bd-4838-4547-80a1-a1e16c37bd1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5acd2ae7d3fbf9440fb0095eefddc52c84728ec6d0e4d9821229cb6aa12573d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5c6216f99e720568a7dd7656244ade4b4fd44412e714b6135242a19f78761f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d261d3f5d679cb5cf8f94fe77bb969fd91ca614da19b149ce6eb02a380114d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"
state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://752fd19e1b4a5fd4026faf94d00dfd9322825c19436cccb582e074f4c203bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04ae80bb757e0b6ea9a3759ec49af25cff9106966608aa9750a0026e6053d15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\
\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:01Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.827625 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.835875 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 10:50:58.412647706 +0000 UTC Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.875287 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2dae2b-fadf-49f8-8ff4-4772ad128dca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45689b9787ddf9d1ee0769e761b7b24a3804e1a92e4bc522b5fc4ef4dc145ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83046a8dcab554309ccb822f7ea451bd90e9bbe0037f2b51061cb2943122ac47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad0f1c98d4f80ff4fb3c0a0b65518ea6fdf45f4e428661eb8c34419d0774da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0a1825528a237f05c91fcea30668c1b12f4b58a604912844e672424907cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:01Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.882280 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.882421 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:44:01 crc kubenswrapper[4769]: E0122 13:44:01.882461 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.882418 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:44:01 crc kubenswrapper[4769]: E0122 13:44:01.882567 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:44:01 crc kubenswrapper[4769]: E0122 13:44:01.882732 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.916442 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340e014090eff13efd7d84a279f298e5d933733828964a3926f5ce780010c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:01Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.929517 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.948016 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.987885 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.008011 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.039700 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d9wdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:02Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.047867 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.067949 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.070997 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fclh4" event={"ID":"d4186e93-df8a-49d3-9068-c8b8acd05baa","Type":"ContainerStarted","Data":"f4e835bfb6d47d3628a5f67cb226a00d51c3eebec57de5db55a54406bf1e6122"} Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.071060 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fclh4" event={"ID":"d4186e93-df8a-49d3-9068-c8b8acd05baa","Type":"ContainerStarted","Data":"6a37281e385959c7ee151c48162eaa01b371ce4fe79f3441940766a91ad77fb8"} Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.074318 4769 generic.go:334] "Generic (PLEG): container finished" podID="cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76" containerID="a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7" exitCode=0 Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.074374 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" event={"ID":"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76","Type":"ContainerDied","Data":"a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7"} Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.076468 4769 generic.go:334] "Generic (PLEG): container finished" podID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerID="bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7" exitCode=0 Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.076509 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" event={"ID":"9c028db8-99b9-422d-ba46-e1a2db06ce3c","Type":"ContainerDied","Data":"bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7"} Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.076555 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" event={"ID":"9c028db8-99b9-422d-ba46-e1a2db06ce3c","Type":"ContainerStarted","Data":"e2d3c55e05f15106417cacacd13bd2ff48a7d39f5b85eb5a6e946e2cf2413457"} Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.088014 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.098870 4769 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.102472 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.102527 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.102536 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.102657 4769 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.128381 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.150755 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.201933 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c028db8-99b9-422d-ba46-e1a2db06ce3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:02Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.208318 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.233153 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 
22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.245924 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-bqn6j"] Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.246360 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-bqn6j" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.250219 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.268202 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/16fc232a-07ad-4611-8612-7b1c3f784c14-serviceca\") pod \"node-ca-bqn6j\" (UID: \"16fc232a-07ad-4611-8612-7b1c3f784c14\") " pod="openshift-image-registry/node-ca-bqn6j" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.268253 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/16fc232a-07ad-4611-8612-7b1c3f784c14-host\") pod \"node-ca-bqn6j\" (UID: \"16fc232a-07ad-4611-8612-7b1c3f784c14\") " pod="openshift-image-registry/node-ca-bqn6j" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.268296 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pwhl\" (UniqueName: \"kubernetes.io/projected/16fc232a-07ad-4611-8612-7b1c3f784c14-kube-api-access-2pwhl\") pod \"node-ca-bqn6j\" (UID: \"16fc232a-07ad-4611-8612-7b1c3f784c14\") " pod="openshift-image-registry/node-ca-bqn6j" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.268560 4769 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.268964 4769 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.270154 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.270268 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.270801 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.270894 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.270968 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:02Z","lastTransitionTime":"2026-01-22T13:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.289258 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.328776 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 22 13:44:02 crc kubenswrapper[4769]: E0122 13:44:02.332390 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c179e315-653f-44a2-90da-146c8bca7b57\\\",\\\"systemUUID\\\":\\\"a3bb8776-1087-4679-a96f-5f1347bd430e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:02Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.337923 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.338179 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.338272 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.338356 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.338418 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:02Z","lastTransitionTime":"2026-01-22T13:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.347274 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.368626 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 22 13:44:02 crc kubenswrapper[4769]: E0122 13:44:02.368695 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c179e315-653f-44a2-90da-146c8bca7b57\\\",\\\"systemUUID\\\":\\\"a3bb8776-1087-4679-a96f-5f1347bd430e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:02Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.369035 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pwhl\" (UniqueName: \"kubernetes.io/projected/16fc232a-07ad-4611-8612-7b1c3f784c14-kube-api-access-2pwhl\") pod \"node-ca-bqn6j\" (UID: 
\"16fc232a-07ad-4611-8612-7b1c3f784c14\") " pod="openshift-image-registry/node-ca-bqn6j" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.369089 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/16fc232a-07ad-4611-8612-7b1c3f784c14-serviceca\") pod \"node-ca-bqn6j\" (UID: \"16fc232a-07ad-4611-8612-7b1c3f784c14\") " pod="openshift-image-registry/node-ca-bqn6j" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.369108 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/16fc232a-07ad-4611-8612-7b1c3f784c14-host\") pod \"node-ca-bqn6j\" (UID: \"16fc232a-07ad-4611-8612-7b1c3f784c14\") " pod="openshift-image-registry/node-ca-bqn6j" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.369158 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/16fc232a-07ad-4611-8612-7b1c3f784c14-host\") pod \"node-ca-bqn6j\" (UID: \"16fc232a-07ad-4611-8612-7b1c3f784c14\") " pod="openshift-image-registry/node-ca-bqn6j" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.374103 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.374130 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.374138 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.374150 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.374159 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:02Z","lastTransitionTime":"2026-01-22T13:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.387835 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 22 13:44:02 crc kubenswrapper[4769]: E0122 13:44:02.393981 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [status patch payload identical to the 13:44:02.368695 attempt above] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:02Z is after 2025-08-24T17:21:41Z"
Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.397207 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.397241 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasNoDiskPressure" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.397250 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.397263 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.397291 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:02Z","lastTransitionTime":"2026-01-22T13:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.407782 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 22 13:44:02 crc kubenswrapper[4769]: E0122 13:44:02.408167 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.415235 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.415278 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasNoDiskPressure" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.415287 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.415312 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.415326 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:02Z","lastTransitionTime":"2026-01-22T13:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.427685 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 22 13:44:02 crc kubenswrapper[4769]: E0122 13:44:02.428749 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
Jan 22 13:44:02 crc kubenswrapper[4769]: E0122 13:44:02.429019 4769 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.431236 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasSufficientMemory" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.431257 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.431266 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.431279 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.431287 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:02Z","lastTransitionTime":"2026-01-22T13:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.431890 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/16fc232a-07ad-4611-8612-7b1c3f784c14-serviceca\") pod \"node-ca-bqn6j\" (UID: \"16fc232a-07ad-4611-8612-7b1c3f784c14\") " pod="openshift-image-registry/node-ca-bqn6j" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.466270 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8acbc9bd-4838-4547-80a1-a1e16c37bd1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5acd2ae7d3fbf9440fb0095eefddc52c84728ec6d0e4d9821229cb6aa12573d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5c6216f99e720568a7dd7656244ade4b4fd44412e714b6135242a19f78761f\\\",\\\"image\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d261d3f5d679cb5cf8f94fe77bb969fd91ca614da19b149ce6eb02a380114d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://752fd19e1b4a5fd4026faf94d00dfd9322825c19436cccb582e074f4c203bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04ae80bb757e0b6ea9a3759ec49af25cff9106966608aa9750a0026e6053d15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb
68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:02Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.467598 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.504319 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pwhl\" (UniqueName: \"kubernetes.io/projected/16fc232a-07ad-4611-8612-7b1c3f784c14-kube-api-access-2pwhl\") pod 
\"node-ca-bqn6j\" (UID: \"16fc232a-07ad-4611-8612-7b1c3f784c14\") " pod="openshift-image-registry/node-ca-bqn6j" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.533645 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.533673 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.533681 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.533694 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.533702 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:02Z","lastTransitionTime":"2026-01-22T13:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.536582 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2dae2b-fadf-49f8-8ff4-4772ad128dca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45689b9787ddf9d1ee0769e761b7b24a3804e1a92e4bc522b5fc4ef4dc145ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83046a8dcab554309ccb822f7ea451bd90e9bbe0037f2b51061cb2943122ac47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34
720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad0f1c98d4f80ff4fb3c0a0b65518ea6fdf45f4e428661eb8c34419d0774da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0a1825528a237f05c91fcea30668c1b12f4b58a604912844e672424907cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:02Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.567300 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-bqn6j" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.580724 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340e014090eff13efd7d84a279f298e5d933733828964a3926f5ce780010c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:02Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.616996 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:02Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.637510 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.637536 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.637546 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.637560 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.637569 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:02Z","lastTransitionTime":"2026-01-22T13:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.658739 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5e43a9-5dd9-470e-a3e1-65be2c0003c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 13:43:58.922221 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 13:43:58.922428 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 13:43:58.923819 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3279922669/tls.crt::/tmp/serving-cert-3279922669/tls.key\\\\\\\"\\\\nI0122 13:43:59.114141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 13:43:59.115832 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 13:43:59.115850 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 13:43:59.115871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 13:43:59.115876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 13:43:59.120589 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 13:43:59.120619 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120629 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 13:43:59.120651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 13:43:59.120656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 13:43:59.120662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 13:43:59.120676 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 13:43:59.123476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:02Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.695498 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:02Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.743183 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b962200fa0a2d9c500125f56e96656fb0feb5d500a768d148cf9ca2a0569f970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f407182e277e55ccfc4bdc9fdd0832e0eab3cad2df3b32a66189e599dd303f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:02Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.745388 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.745429 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.745443 4769 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.745460 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.745471 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:02Z","lastTransitionTime":"2026-01-22T13:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.774874 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:02Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.815574 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0af8746-c9f0-48e6-8a60-02fed286b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be9c299672c1ced5d45263cb73ea0d7766a3cd47bbb16996c91c24206203ffe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9528976f6e0625739097546d794445c24881673bfd0df525a77bbbd61e67897d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hwhw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:02Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.837048 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 05:30:58.990195912 +0000 UTC Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.848062 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.848109 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.848119 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.848137 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.848147 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:02Z","lastTransitionTime":"2026-01-22T13:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.854806 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:02Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.894739 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x582x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34fa095e-fc7f-431c-8421-1178e63721ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c630331d1e07a9bd283dfb95e9324ecc7666491cb2a4383ba82f45bbcaee3ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c8w6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x582x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:02Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.937905 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fclh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4186e93-df8a-49d3-9068-c8b8acd05baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk8w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fclh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:02Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.950445 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.950483 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.950491 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:02 crc kubenswrapper[4769]: 
I0122 13:44:02.950507 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.950519 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:02Z","lastTransitionTime":"2026-01-22T13:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.978635 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:02Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.015808 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b962200fa0a2d9c500125f56e96656fb0feb5d500a768d148cf9ca2a0569f970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f407182e277e55ccfc4bdc9fdd0832e0eab3cad2df3b32a66189e599dd303f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:03Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.052803 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.052832 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.052840 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.052852 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.052861 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:03Z","lastTransitionTime":"2026-01-22T13:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.057459 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:03Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.081481 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-bqn6j" event={"ID":"16fc232a-07ad-4611-8612-7b1c3f784c14","Type":"ContainerStarted","Data":"55ec909e6b2784f7c1e2fca82827a0c42109ef1c08e0c2ae484574c8f2c6460f"} Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.081538 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-bqn6j" event={"ID":"16fc232a-07ad-4611-8612-7b1c3f784c14","Type":"ContainerStarted","Data":"327fde5cbfec4910b000d0772fd70a5e06aec89502e45c3ffe43507237f307c3"} Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.084742 4769 generic.go:334] "Generic (PLEG): container finished" podID="cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76" containerID="9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe" exitCode=0 Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.084843 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" event={"ID":"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76","Type":"ContainerDied","Data":"9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe"} Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.090311 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" event={"ID":"9c028db8-99b9-422d-ba46-e1a2db06ce3c","Type":"ContainerStarted","Data":"a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571"} Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.090457 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" event={"ID":"9c028db8-99b9-422d-ba46-e1a2db06ce3c","Type":"ContainerStarted","Data":"662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94"} Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.090558 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" event={"ID":"9c028db8-99b9-422d-ba46-e1a2db06ce3c","Type":"ContainerStarted","Data":"926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624"} Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.090641 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" 
event={"ID":"9c028db8-99b9-422d-ba46-e1a2db06ce3c","Type":"ContainerStarted","Data":"f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44"} Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.090721 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" event={"ID":"9c028db8-99b9-422d-ba46-e1a2db06ce3c","Type":"ContainerStarted","Data":"599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9"} Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.090836 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" event={"ID":"9c028db8-99b9-422d-ba46-e1a2db06ce3c","Type":"ContainerStarted","Data":"73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609"} Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.102291 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5e43a9-5dd9-470e-a3e1-65be2c0003c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 13:43:58.922221 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 13:43:58.922428 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 13:43:58.923819 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3279922669/tls.crt::/tmp/serving-cert-3279922669/tls.key\\\\\\\"\\\\nI0122 13:43:59.114141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 13:43:59.115832 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 13:43:59.115850 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 13:43:59.115871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 13:43:59.115876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 13:43:59.120589 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 13:43:59.120619 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120629 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 13:43:59.120651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 13:43:59.120656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 13:43:59.120662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 13:43:59.120676 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 13:43:59.123476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:03Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.141118 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0af8746-c9f0-48e6-8a60-02fed286b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be9c299672c1ced5d45263cb73ea0d7766a3cd47bbb16996c91c24206203ffe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9528976f6e0625739097546d794445c24881673bfd0df525a77bbbd61e67897d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hwhw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:03Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.157752 4769 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.157831 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.157846 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.157867 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.157880 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:03Z","lastTransitionTime":"2026-01-22T13:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.173708 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bqn6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16fc232a-07ad-4611-8612-7b1c3f784c14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bqn6j\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:03Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.221911 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fclh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4186e93-df8a-49d3-9068-c8b8acd05baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4e835bfb6d47d3628a5f67cb226a00d51c3eebec57de5db55a54406bf1e6122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-kk8w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fclh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:03Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.262172 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.262227 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.262242 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.262261 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.262274 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:03Z","lastTransitionTime":"2026-01-22T13:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.262658 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:03Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.295679 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x582x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34fa095e-fc7f-431c-8421-1178e63721ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c630331d1e07a9bd283dfb95e9324ecc7666491cb2a4383ba82f45bbcaee3ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c8w6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x582x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-22T13:44:03Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.343082 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2dae2b-fadf-49f8-8ff4-4772ad128dca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45689b9787ddf9d1ee0769e761b7b24a3804e1a92e4bc522b5fc4ef4dc145ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83046a8dcab554309ccb822f7ea451bd90e9bbe0037f2b51061cb2943122ac47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad0f1c98d4f80ff4fb3c0a0b65518ea6fdf45f4e428661eb8c34419d0774da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
,{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0a1825528a237f05c91fcea30668c1b12f4b58a604912844e672424907cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:03Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.364552 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.364603 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.364616 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.364636 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.364662 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:03Z","lastTransitionTime":"2026-01-22T13:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.380861 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340e014090eff13efd7d84a279f298e5d933733828964a3926f5ce780010c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:03Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.416294 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:03Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.465961 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d9wdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:03Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:03 crc 
kubenswrapper[4769]: I0122 13:44:03.467682 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.467745 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.467763 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.467785 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.467826 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:03Z","lastTransitionTime":"2026-01-22T13:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.510757 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c028db8-99b9-422d-ba46-e1a2db06ce3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:03Z 
is after 2025-08-24T17:21:41Z" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.542380 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8acbc9bd-4838-4547-80a1-a1e16c37bd1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5acd2ae7d3fbf9440fb0095eefddc52c84728ec6d0e4d9821229cb6aa12573d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5c6216f99e720568a7dd7656244ade4b4fd44412e714b6135242a19f78761f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d261d3f5d679cb5cf8f94fe77bb969fd91ca614da19b149ce6eb02a380114d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"
/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://752fd19e1b4a5fd4026faf94d00dfd9322825c19436cccb582e074f4c203bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04ae80bb757e0b6ea9a3759ec49af25cff9106966608aa9750a0026e6053d15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\"
:\\\"2026-01-22T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:03Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.569368 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.569409 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.569419 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.569433 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.569443 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:03Z","lastTransitionTime":"2026-01-22T13:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.574894 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0af8746-c9f0-48e6-8a60-02fed286b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be9c299672c1ced5d45263cb73ea0d7766a3cd47bbb16996c91c24206203ffe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9528976f6e0625739097546d794445c24881673bfd0df525a77bbbd61e67897d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hwhw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:03Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.582172 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:44:03 crc kubenswrapper[4769]: E0122 13:44:03.582276 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:44:07.582254794 +0000 UTC m=+26.993364723 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.582715 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:44:03 crc kubenswrapper[4769]: E0122 13:44:03.582876 4769 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.582973 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:44:03 crc kubenswrapper[4769]: E0122 13:44:03.583249 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 13:44:07.583190528 +0000 UTC m=+26.994300487 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 13:44:03 crc kubenswrapper[4769]: E0122 13:44:03.583299 4769 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 13:44:03 crc kubenswrapper[4769]: E0122 13:44:03.583453 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 13:44:07.583426564 +0000 UTC m=+26.994536523 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.614889 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bqn6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16fc232a-07ad-4611-8612-7b1c3f784c14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ec909e6b2784f7c1e2fca82827a0c42109ef1c08e0c2ae484574c8f2c6460f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192
.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bqn6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:03Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.657773 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:03Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.671572 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.671610 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.671619 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.671632 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.671640 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:03Z","lastTransitionTime":"2026-01-22T13:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.684482 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.684546 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:44:03 crc kubenswrapper[4769]: E0122 13:44:03.684671 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 13:44:03 crc kubenswrapper[4769]: E0122 13:44:03.684677 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 13:44:03 crc kubenswrapper[4769]: E0122 13:44:03.684699 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 13:44:03 crc kubenswrapper[4769]: E0122 13:44:03.684710 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 13:44:03 crc kubenswrapper[4769]: E0122 13:44:03.684714 4769 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 13:44:03 crc kubenswrapper[4769]: E0122 13:44:03.684726 4769 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 13:44:03 crc kubenswrapper[4769]: E0122 13:44:03.684773 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-22 13:44:07.684754297 +0000 UTC m=+27.095864226 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 13:44:03 crc kubenswrapper[4769]: E0122 13:44:03.684816 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 13:44:07.684783498 +0000 UTC m=+27.095893427 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.694994 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x582x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34fa095e-fc7f-431c-8421-1178e63721ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c630331d1e07a9bd283dfb95e9324ecc7666491cb2a4383ba82f45bbcaee3ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c8w6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x582x\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:03Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.735076 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fclh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4186e93-df8a-49d3-9068-c8b8acd05baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4e835bfb6d47d3628a5f67cb226a00d51c3eebec57de5db55a54406bf1e6122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccou
nt\\\",\\\"name\\\":\\\"kube-api-access-kk8w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fclh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:03Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.774109 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.774160 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.774172 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.774190 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.774202 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:03Z","lastTransitionTime":"2026-01-22T13:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.777888 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340e014090eff13efd7d84a279f298e5d933733828964a3926f5ce780010c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:03Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.819655 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:03Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.837940 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 20:42:51.629259116 +0000 UTC Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.858418 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d9wdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:03Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.876925 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.876980 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.876998 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.877021 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.877036 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:03Z","lastTransitionTime":"2026-01-22T13:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.883181 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.883216 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.883225 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:44:03 crc kubenswrapper[4769]: E0122 13:44:03.883305 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:44:03 crc kubenswrapper[4769]: E0122 13:44:03.883479 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:44:03 crc kubenswrapper[4769]: E0122 13:44:03.883561 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.901377 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c028db8-99b9-422d-ba46-e1a2db06ce3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\
\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"contai
nerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:03Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.943113 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8acbc9bd-4838-4547-80a1-a1e16c37bd1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5acd2ae7d3fbf9440fb0095eefddc52c84728ec6d0e4d9821229cb6aa12573d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var
/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5c6216f99e720568a7dd7656244ade4b4fd44412e714b6135242a19f78761f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d261d3f5d679cb5cf8f94fe77bb969fd91ca614da19b149ce6eb02a380114d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://752fd19e1b4a5fd4026faf94d00dfd9322825c19436cccb582e074f4c203bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04ae80bb757e0b6ea9a3759ec49af25cff9106966608aa9750a0026e6053d15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerI
D\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:03Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.976732 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2dae2b-fadf-49f8-8ff4-4772ad128dca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45689b9787ddf9d1ee0769e761b7b24a3804e1a92e4bc522b5fc4ef4dc145ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83046a8dcab554309ccb822f7ea451bd90e9bbe0037f2b51061cb2943122ac47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad0f1c98d4f80ff4fb3c0a0b65518ea6fdf45f4e428661eb8c34419d0774da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0a1825528a237f05c91fcea30668c1b12f4b58a604912844e672424907cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:03Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.979226 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.979258 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.979271 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.979286 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.979298 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:03Z","lastTransitionTime":"2026-01-22T13:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.015044 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b962200fa0a2d9c500125f56e96656fb0feb5d500a768d148cf9ca2a0569f970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f407182e277e55ccfc4bdc9fdd0832e0eab3cad2df3b32a66189e599dd303f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:04Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.063254 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:04Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.082689 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.082747 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.082760 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.082780 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.082824 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:04Z","lastTransitionTime":"2026-01-22T13:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.095953 4769 generic.go:334] "Generic (PLEG): container finished" podID="cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76" containerID="f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8" exitCode=0 Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.096027 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" event={"ID":"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76","Type":"ContainerDied","Data":"f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8"} Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.097969 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"b546a5839399c4434cd7427e59be44414fd81cd31a3b8736bbc23d9c03ab5fd9"} Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.099642 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5e43a9-5dd9-470e-a3e1-65be2c0003c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 13:43:58.922221 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 13:43:58.922428 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 13:43:58.923819 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3279922669/tls.crt::/tmp/serving-cert-3279922669/tls.key\\\\\\\"\\\\nI0122 13:43:59.114141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 13:43:59.115832 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 13:43:59.115850 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 13:43:59.115871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 13:43:59.115876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 13:43:59.120589 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 13:43:59.120619 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120629 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 13:43:59.120651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 13:43:59.120656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 13:43:59.120662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 13:43:59.120676 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 13:43:59.123476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:04Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.143231 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:04Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.179170 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b546a5839399c4434cd7427e59be44414fd81cd31a3b8736bbc23d9c03ab5fd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:04Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.186183 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.186221 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.186263 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.186281 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.186293 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:04Z","lastTransitionTime":"2026-01-22T13:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.222180 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5e43a9-5dd9-470e-a3e1-65be2c0003c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 13:43:58.922221 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 13:43:58.922428 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 13:43:58.923819 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3279922669/tls.crt::/tmp/serving-cert-3279922669/tls.key\\\\\\\"\\\\nI0122 13:43:59.114141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 13:43:59.115832 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 13:43:59.115850 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 13:43:59.115871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 13:43:59.115876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 13:43:59.120589 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 13:43:59.120619 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120629 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 13:43:59.120651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 13:43:59.120656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 13:43:59.120662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 13:43:59.120676 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 13:43:59.123476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:04Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.258429 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:04Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.289002 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.289046 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.289056 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.289072 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.289093 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:04Z","lastTransitionTime":"2026-01-22T13:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.301022 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b962200fa0a2d9c500125f56e96656fb0feb5d500a768d148cf9ca2a0569f970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f407182e277e55ccfc4bdc9fdd0832e0eab3cad2df3b32a66189e599dd303f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:04Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.338277 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0af8746-c9f0-48e6-8a60-02fed286b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be9c299672c1ced5d45263cb73ea0d7766a3cd47bbb16996c91c24206203ffe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9528976f6e0625739097546d794445c24881673bfd0df525a77bbbd61e67897d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hwhw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:04Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.373772 4769 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-bqn6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16fc232a-07ad-4611-8612-7b1c3f784c14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ec909e6b2784f7c1e2fca82827a0c42109ef1c08e0c2ae484574c8f2c6460f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bqn6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:04Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.390886 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.390925 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.390940 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.390957 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.390968 4769 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:04Z","lastTransitionTime":"2026-01-22T13:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.415925 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:04Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.460151 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x582x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34fa095e-fc7f-431c-8421-1178e63721ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c630331d1e07a9bd283dfb95e9324ecc7666491cb2a4383ba82f45bbcaee3ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c8w6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x582x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-22T13:44:04Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.494253 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.494316 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.494339 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.494368 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.494385 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:04Z","lastTransitionTime":"2026-01-22T13:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.502669 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fclh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4186e93-df8a-49d3-9068-c8b8acd05baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4e835bfb6d47d3628a5f67cb226a00d51c3eebec57de5db55a54406bf1e6122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":
\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk8w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fclh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:04Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.535850 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:04Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.577154 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d9wdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:04Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.596105 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.596137 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.596145 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.596158 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.596167 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:04Z","lastTransitionTime":"2026-01-22T13:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.621025 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c028db8-99b9-422d-ba46-e1a2db06ce3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17
be955eea0343c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:04Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.665203 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8acbc9bd-4838-4547-80a1-a1e16c37bd1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5acd2ae7d3fbf9440fb0095eefddc52c84728ec6d0e4d9821229cb6aa12573d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\
"cri-o://5e5c6216f99e720568a7dd7656244ade4b4fd44412e714b6135242a19f78761f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d261d3f5d679cb5cf8f94fe77bb969fd91ca614da19b149ce6eb02a380114d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://752fd19e1b4a5fd4026faf94d00dfd9322825c19436cccb582e074f4c203bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04ae80bb757e0b6ea9a3759ec49af25cff9106966608aa9750a0026e6053d15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15
c4434c2c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:04Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.698102 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.698152 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.698170 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.698196 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.698215 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:04Z","lastTransitionTime":"2026-01-22T13:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.700964 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2dae2b-fadf-49f8-8ff4-4772ad128dca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45689b9787ddf9d1ee0769e761b7b24a3804e1a92e4bc522b5fc4ef4dc145ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83046a8dcab554309ccb822f7ea451bd90e9bbe0037f2b51061cb2943122ac47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert
-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad0f1c98d4f80ff4fb3c0a0b65518ea6fdf45f4e428661eb8c34419d0774da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0a1825528a237f05c91fcea30668c1b12f4b58a604912844e672424907cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:04Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.742889 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340e014090eff13efd7d84a279f298e5d933733828964a3926f5ce780010c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:04Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.800451 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.800485 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.800494 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.800508 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.800518 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:04Z","lastTransitionTime":"2026-01-22T13:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.838186 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 16:56:38.970668026 +0000 UTC Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.903088 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.903146 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.903159 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.903197 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.903210 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:04Z","lastTransitionTime":"2026-01-22T13:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.005474 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.005540 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.005549 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.005564 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.005575 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:05Z","lastTransitionTime":"2026-01-22T13:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.104496 4769 generic.go:334] "Generic (PLEG): container finished" podID="cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76" containerID="9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e" exitCode=0 Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.104583 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" event={"ID":"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76","Type":"ContainerDied","Data":"9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e"} Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.110868 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" event={"ID":"9c028db8-99b9-422d-ba46-e1a2db06ce3c","Type":"ContainerStarted","Data":"3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821"} Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.111325 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.111349 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.111358 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.111369 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.111380 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:05Z","lastTransitionTime":"2026-01-22T13:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.125312 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0af8746-c9f0-48e6-8a60-02fed286b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be9c299672c1ced5d45263cb73ea0d7766a3cd47bbb16996c91c24206203ffe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9528976f6e0625739097546d794445c24881673bfd0df525a77bbbd61e67897d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hwhw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:05Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.137703 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bqn6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16fc232a-07ad-4611-8612-7b1c3f784c14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ec909e6b2784f7c1e2fca82827a0c42109ef1c08e0c2ae484574c8f2c6460f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bqn6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:05Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.152761 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:05Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.166077 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x582x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34fa095e-fc7f-431c-8421-1178e63721ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c630331d1e07a9bd283dfb95e9324ecc7666491cb2a4383ba82f45bbcaee3ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c8w6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x582x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:05Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.184354 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fclh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4186e93-df8a-49d3-9068-c8b8acd05baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4e835bfb6d47d3628a5f67cb226a00d51c3eebec57de5db55a54406bf1e6122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk8w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fclh4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:05Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.201835 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},
{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d9wdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:05Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.214545 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.214593 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.214608 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.214627 4769 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.214678 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:05Z","lastTransitionTime":"2026-01-22T13:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.222617 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c028db8-99b9-422d-ba46-e1a2db06ce3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:05Z 
is after 2025-08-24T17:21:41Z" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.261469 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8acbc9bd-4838-4547-80a1-a1e16c37bd1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5acd2ae7d3fbf9440fb0095eefddc52c84728ec6d0e4d9821229cb6aa12573d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5c6216f99e720568a7dd7656244ade4b4fd44412e714b6135242a19f78761f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d261d3f5d679cb5cf8f94fe77bb969fd91ca614da19b149ce6eb02a380114d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"
/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://752fd19e1b4a5fd4026faf94d00dfd9322825c19436cccb582e074f4c203bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04ae80bb757e0b6ea9a3759ec49af25cff9106966608aa9750a0026e6053d15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\"
:\\\"2026-01-22T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:05Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.274718 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2dae2b-fadf-49f8-8ff4-4772ad128dca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45689b9787ddf9d1ee0769e761b7b24a3804e1a92e4bc522b5fc4ef4dc145ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83046a8dcab554309ccb822f7ea451bd90e9bbe0037f2b51061cb2943122ac47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad0f1c98d4f80ff4fb3c0a0b65518ea6fdf45f4e428661eb8c34419d0774da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0a1825528a237f05c91fcea30668c1b12f4b58a604912844e672424907cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:05Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.289928 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340e014090eff13efd7d84a279f298e5d933733828964a3926f5ce780010c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:05Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.308046 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:05Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.318951 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.318993 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.319010 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.319032 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.319049 4769 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:05Z","lastTransitionTime":"2026-01-22T13:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.322825 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5e43a9-5dd9-470e-a3e1-65be2c0003c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apis
erver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 13:43:58.922221 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 13:43:58.922428 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 13:43:58.923819 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3279922669/tls.crt::/tmp/serving-cert-3279922669/tls.key\\\\\\\"\\\\nI0122 13:43:59.114141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 13:43:59.115832 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 13:43:59.115850 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 13:43:59.115871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 13:43:59.115876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 13:43:59.120589 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 13:43:59.120619 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120629 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 13:43:59.120651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 13:43:59.120656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 13:43:59.120662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 13:43:59.120676 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 13:43:59.123476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:05Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.335844 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:05Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.358995 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b962200fa0a2d9c500125f56e96656fb0feb5d500a768d148cf9ca2a0569f970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f407182e277e55ccfc4bdc9fdd0832e0eab3cad2df3b32a66189e599dd303f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:05Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.371948 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b546a5839399c4434cd7427e59be44414fd81cd31a3b8736bbc23d9c03ab5fd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:05Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.422319 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.422361 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.422371 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.422385 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.422397 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:05Z","lastTransitionTime":"2026-01-22T13:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.525722 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.525787 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.525843 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.525871 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.525887 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:05Z","lastTransitionTime":"2026-01-22T13:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.629340 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.629380 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.629390 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.629406 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.629417 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:05Z","lastTransitionTime":"2026-01-22T13:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.733334 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.733390 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.733401 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.733421 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.733435 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:05Z","lastTransitionTime":"2026-01-22T13:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.835535 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.835586 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.835602 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.835624 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.835636 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:05Z","lastTransitionTime":"2026-01-22T13:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.839165 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 12:28:14.126013778 +0000 UTC Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.882589 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:44:05 crc kubenswrapper[4769]: E0122 13:44:05.882692 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.882829 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.882883 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:44:05 crc kubenswrapper[4769]: E0122 13:44:05.883084 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:44:05 crc kubenswrapper[4769]: E0122 13:44:05.883250 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.938430 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.938468 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.938479 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.938496 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.938508 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:05Z","lastTransitionTime":"2026-01-22T13:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.041387 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.041430 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.041441 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.041459 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.041471 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:06Z","lastTransitionTime":"2026-01-22T13:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.116686 4769 generic.go:334] "Generic (PLEG): container finished" podID="cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76" containerID="8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c" exitCode=0 Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.116742 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" event={"ID":"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76","Type":"ContainerDied","Data":"8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c"} Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.139744 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b962200fa0a2d9c500125f56e96656fb0feb5d500a768d148cf9ca2a0569f970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f407182e277e55ccfc4bdc9fdd0832e0eab3cad2df3b32a66189e599dd303f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:06Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.143805 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.143853 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.143867 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.143884 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.143895 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:06Z","lastTransitionTime":"2026-01-22T13:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.161390 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b546a5839399c4434cd7427e59be44414fd81cd31a3b8736bbc23d9c03ab5fd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:06Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.177856 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5e43a9-5dd9-470e-a3e1-65be2c0003c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1
ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 13:43:58.922221 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 13:43:58.922428 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 13:43:58.923819 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3279922669/tls.crt::/tmp/serving-cert-3279922669/tls.key\\\\\\\"\\\\nI0122 13:43:59.114141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 13:43:59.115832 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 13:43:59.115850 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 13:43:59.115871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 13:43:59.115876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 13:43:59.120589 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 13:43:59.120619 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120629 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 13:43:59.120651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 13:43:59.120656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 13:43:59.120662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 13:43:59.120676 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 13:43:59.123476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:06Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.195519 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:06Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.216585 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0af8746-c9f0-48e6-8a60-02fed286b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be9c299672c1ced5d45263cb73ea0d7766a3cd47bbb16996c91c24206203ffe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9528976f6e0625739097546d794445c24881673bfd0df525a77bbbd61e67897d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hwhw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:06Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.229833 4769 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-bqn6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16fc232a-07ad-4611-8612-7b1c3f784c14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ec909e6b2784f7c1e2fca82827a0c42109ef1c08e0c2ae484574c8f2c6460f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bqn6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:06Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.244123 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:06Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.248727 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.248772 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.248783 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.248818 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.248831 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:06Z","lastTransitionTime":"2026-01-22T13:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.255231 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x582x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34fa095e-fc7f-431c-8421-1178e63721ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c630331d1e07a9bd283dfb95e9324ecc7666491cb2a4383ba82f45bbcaee3ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c8w6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x582x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:06Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.267334 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fclh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4186e93-df8a-49d3-9068-c8b8acd05baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4e835bfb6d47d3628a5f67cb226a00d51c3eebec57de5db55a54406bf1e6122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk8w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fclh4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:06Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.286008 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340e014090eff13efd7d84a279f298e5d933733828964a3926f5ce780010c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:06Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.297405 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with 
unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:06Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.313694 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d9wdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:06Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.330156 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c028db8-99b9-422d-ba46-e1a2db06ce3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:06Z 
is after 2025-08-24T17:21:41Z" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.347894 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8acbc9bd-4838-4547-80a1-a1e16c37bd1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5acd2ae7d3fbf9440fb0095eefddc52c84728ec6d0e4d9821229cb6aa12573d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5c6216f99e720568a7dd7656244ade4b4fd44412e714b6135242a19f78761f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d261d3f5d679cb5cf8f94fe77bb969fd91ca614da19b149ce6eb02a380114d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"
/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://752fd19e1b4a5fd4026faf94d00dfd9322825c19436cccb582e074f4c203bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04ae80bb757e0b6ea9a3759ec49af25cff9106966608aa9750a0026e6053d15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\"
:\\\"2026-01-22T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:06Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.351616 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.351659 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.351671 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.351694 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.351705 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:06Z","lastTransitionTime":"2026-01-22T13:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.361054 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2dae2b-fadf-49f8-8ff4-4772ad128dca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45689b9787ddf9d1ee0769e761b7b24a3804e1a92e4bc522b5fc4ef4dc145ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83046a8dcab554309ccb822f7ea451bd90e9bbe0037f2b51061cb2943122ac47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad0f1c98d4f80ff4fb3c0a0b65518ea6fdf45f4e428661eb8c34419d0774da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0a1825528a237f05c91fcea30668c1b12f4b58a604912844e672424907cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:06Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.454875 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.454926 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.454937 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.454957 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.454969 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:06Z","lastTransitionTime":"2026-01-22T13:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.558215 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.558252 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.558261 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.558273 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.558282 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:06Z","lastTransitionTime":"2026-01-22T13:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.661864 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.661953 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.661965 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.661990 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.662009 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:06Z","lastTransitionTime":"2026-01-22T13:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.765727 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.765835 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.765855 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.765879 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.765896 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:06Z","lastTransitionTime":"2026-01-22T13:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.840241 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 13:50:28.167947119 +0000 UTC Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.868869 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.868945 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.868972 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.869039 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.869063 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:06Z","lastTransitionTime":"2026-01-22T13:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.972020 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.972076 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.972095 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.972124 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.972142 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:06Z","lastTransitionTime":"2026-01-22T13:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.075666 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.076063 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.076082 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.076491 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.076548 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:07Z","lastTransitionTime":"2026-01-22T13:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.133895 4769 generic.go:334] "Generic (PLEG): container finished" podID="cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76" containerID="b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac" exitCode=0 Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.133937 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" event={"ID":"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76","Type":"ContainerDied","Data":"b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac"} Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.165768 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c028db8-99b9-422d-ba46-e1a2db06ce3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:07Z 
is after 2025-08-24T17:21:41Z" Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.180266 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.180301 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.180315 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.180329 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.180337 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:07Z","lastTransitionTime":"2026-01-22T13:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.196852 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8acbc9bd-4838-4547-80a1-a1e16c37bd1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5acd2ae7d3fbf9440fb0095eefddc52c84728ec6d0e4d9821229cb6aa12573d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5c6216f99e720568a7dd7656244ade4b4fd44412e714b6135242a19f78761f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d261d3f5d679cb5cf8f94fe77bb969fd91ca614da19b149ce6eb02a380114d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://752fd19e1b4a5fd4026faf94d00dfd9322825c19436cccb582e074f4c203bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04ae80bb757e0b6ea9a3759ec49af25cff9106966608aa9750a0026e6053d15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731c
a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:07Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.219844 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2dae2b-fadf-49f8-8ff4-4772ad128dca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45689b9787ddf9d1ee0769e761b7b24a3804e1a92e4bc522b5fc4ef4dc145ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83046a8dcab554309ccb822f7ea451bd90e9bbe0037f2b51061cb2943122ac47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad0f1c98d4f80ff4fb3c0a0b65518ea6fdf45f4e428661eb8c34419d0774da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0a1825528a237f05c91fcea30668c1b12f4b58a604912844e672424907cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:07Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.239040 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340e014090eff13efd7d84a279f298e5d933733828964a3926f5ce780010c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:07Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.253363 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:07Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.271188 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d9wdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:07Z is after 2025-08-24T17:21:41Z"
Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.282324 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.282358 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.282369 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.282384 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.282397 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:07Z","lastTransitionTime":"2026-01-22T13:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.290427 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5e43a9-5dd9-470e-a3e1-65be2c0003c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 13:43:58.922221 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 13:43:58.922428 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 13:43:58.923819 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3279922669/tls.crt::/tmp/serving-cert-3279922669/tls.key\\\\\\\"\\\\nI0122 13:43:59.114141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 13:43:59.115832 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 13:43:59.115850 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 13:43:59.115871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 13:43:59.115876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 13:43:59.120589 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 13:43:59.120619 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120629 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 13:43:59.120651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 13:43:59.120656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 13:43:59.120662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 13:43:59.120676 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 13:43:59.123476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:07Z is after 2025-08-24T17:21:41Z"
Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.305832 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:07Z is after 2025-08-24T17:21:41Z"
Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.320859 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b962200fa0a2d9c500125f56e96656fb0feb5d500a768d148cf9ca2a0569f970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f407182e277e55ccfc4bdc9fdd0832e0eab3cad2df3b32a66189e599dd303f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:07Z is after 2025-08-24T17:21:41Z"
Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.335414 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b546a5839399c4434cd7427e59be44414fd81cd31a3b8736bbc23d9c03ab5fd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:07Z is after 2025-08-24T17:21:41Z"
Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.348977 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0af8746-c9f0-48e6-8a60-02fed286b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be9c299672c1ced5d45263cb73ea0d7766a3cd47bbb16996c91c24206203ffe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9528976f6e0625739097546d794445c24881673bfd0df525a77bbbd61e67897d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hwhw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:07Z is after 2025-08-24T17:21:41Z"
Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.360780 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bqn6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16fc232a-07ad-4611-8612-7b1c3f784c14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ec909e6b2784f7c1e2fca82827a0c42109ef1c08e0c2ae484574c8f2c6460f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bqn6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:07Z is after 2025-08-24T17:21:41Z"
Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.373905 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:07Z is after 2025-08-24T17:21:41Z"
Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.384598 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x582x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34fa095e-fc7f-431c-8421-1178e63721ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c630331d1e07a9bd283dfb95e9324ecc7666491cb2a4383ba82f45bbcaee3ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c8w6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x582x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:07Z is after 2025-08-24T17:21:41Z"
Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.385534 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.385595 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.385644 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.385671 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.385753 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:07Z","lastTransitionTime":"2026-01-22T13:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.396511 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fclh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4186e93-df8a-49d3-9068-c8b8acd05baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4e835bfb6d47d3628a5f67cb226a00d51c3eebec57de5db55a54406bf1e6122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk8w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fclh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:07Z is after 2025-08-24T17:21:41Z"
Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.488162 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.488196 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.488204 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.488217 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.488227 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:07Z","lastTransitionTime":"2026-01-22T13:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.590465 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.590500 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.590511 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.590527 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.590536 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:07Z","lastTransitionTime":"2026-01-22T13:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.650419 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.650498 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.650530 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 22 13:44:07 crc kubenswrapper[4769]: E0122 13:44:07.650641 4769 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 22 13:44:07 crc kubenswrapper[4769]: E0122 13:44:07.650670 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:44:15.650633219 +0000 UTC m=+35.061743178 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 13:44:07 crc kubenswrapper[4769]: E0122 13:44:07.650723 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 13:44:15.650708461 +0000 UTC m=+35.061818470 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 22 13:44:07 crc kubenswrapper[4769]: E0122 13:44:07.650736 4769 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 22 13:44:07 crc kubenswrapper[4769]: E0122 13:44:07.650930 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 13:44:15.650889405 +0000 UTC m=+35.061999334 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.693730 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.693780 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.693815 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.693835 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.693849 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:07Z","lastTransitionTime":"2026-01-22T13:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.751217 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.751294 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 22 13:44:07 crc kubenswrapper[4769]: E0122 13:44:07.751408 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 22 13:44:07 crc kubenswrapper[4769]: E0122 13:44:07.751415 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 22 13:44:07 crc kubenswrapper[4769]: E0122 13:44:07.751473 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 22 13:44:07 crc kubenswrapper[4769]: E0122 13:44:07.751504 4769 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 22 13:44:07 crc kubenswrapper[4769]: E0122 13:44:07.751582 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-22 13:44:15.75155878 +0000 UTC m=+35.162668749 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 22 13:44:07 crc kubenswrapper[4769]: E0122 13:44:07.751426 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 22 13:44:07 crc kubenswrapper[4769]: E0122 13:44:07.751633 4769 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 22 13:44:07 crc kubenswrapper[4769]: E0122 13:44:07.751673 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 13:44:15.751660833 +0000 UTC m=+35.162770802 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.796230 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.796280 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.796299 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.796317 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.796331 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:07Z","lastTransitionTime":"2026-01-22T13:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.841280 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 14:07:03.693814485 +0000 UTC
Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.882864 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.882942 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.882865 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 22 13:44:07 crc kubenswrapper[4769]: E0122 13:44:07.883148 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 22 13:44:07 crc kubenswrapper[4769]: E0122 13:44:07.883054 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 22 13:44:07 crc kubenswrapper[4769]: E0122 13:44:07.883322 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.899535 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.899603 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.899621 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.899645 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.899667 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:07Z","lastTransitionTime":"2026-01-22T13:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.003464 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.003514 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.003523 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.003538 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.003547 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:08Z","lastTransitionTime":"2026-01-22T13:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.106571 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.106617 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.106629 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.106646 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.106659 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:08Z","lastTransitionTime":"2026-01-22T13:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.145362 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" event={"ID":"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76","Type":"ContainerStarted","Data":"f6f2c33f3aa2c24eb46a672d0d23d034a7833eb85c8d4c7313b04fd659d7db54"}
Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.149968 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" event={"ID":"9c028db8-99b9-422d-ba46-e1a2db06ce3c","Type":"ContainerStarted","Data":"8176020a9c6407ebbc5e5935aca998a9a8133090e712cea593113a338827293b"}
Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.150245 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z"
Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.150421 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z"
Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.150486 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z"
Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.176456 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z"
Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.177873 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z"
Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.178278 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8acbc9bd-4838-4547-80a1-a1e16c37bd1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5acd2ae7d3fbf9440fb0095eefddc52c84728ec6d0e4d9821229cb6aa12573d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5c6216f99e720568a7dd7656244ade4b4fd44412e714b6135242a19f78761f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d261d3f5d679cb5cf8f94fe77bb969fd91ca614da19b149ce6eb02a380114d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://752fd19e1b4a5fd4026faf94d00dfd9322825c19436cccb582e074f4c203bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04ae80bb757e0b6ea9a3759ec49af25cff9106966608aa9750a0026e6053d15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.189809 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2dae2b-fadf-49f8-8ff4-4772ad128dca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45689b9787ddf9d1ee0769e761b7b24a3804e1a92e4bc522b5fc4ef4dc145ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83046a8dcab554309ccb822f7ea451bd90e9bbe0037f2b51061cb2943122ac47\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad0f1c98d4f80ff4fb3c0a0b65518ea6fdf45f4e428661eb8c34419d0774da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0a1825528a237f05c91fcea30668c1b12f4b58a604912844e672424907cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.201134 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340e014090eff13efd7d84a279f298e5d933733828964a3926f5ce780010c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.209219 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.209262 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.209276 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.209295 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.209309 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:08Z","lastTransitionTime":"2026-01-22T13:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.217426 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.237433 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6f2c33f3aa2c24eb46a672d0d23d034a7833eb85c8d4c7313b04fd659d7db54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d9wdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.260185 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c028db8-99b9-422d-ba46-e1a2db06ce3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.282160 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5e43a9-5dd9-470e-a3e1-65be2c0003c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 13:43:58.922221 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 13:43:58.922428 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 13:43:58.923819 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3279922669/tls.crt::/tmp/serving-cert-3279922669/tls.key\\\\\\\"\\\\nI0122 13:43:59.114141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 13:43:59.115832 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 13:43:59.115850 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 13:43:59.115871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 13:43:59.115876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 13:43:59.120589 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 13:43:59.120619 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120629 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 13:43:59.120651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 13:43:59.120656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 13:43:59.120662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 13:43:59.120676 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 13:43:59.123476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.294638 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.307041 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b962200fa0a2d9c500125f56e96656fb0feb5d500a768d148cf9ca2a0569f970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f407182e277e55ccfc4bdc9fdd0832e0eab3cad2df3b32a66189e599dd303f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.312107 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.312144 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.312156 4769 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.312175 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.312228 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:08Z","lastTransitionTime":"2026-01-22T13:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.321983 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b546a5839399c4434cd7427e59be44414fd81cd31a3b8736bbc23d9c03ab5fd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.333227 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bqn6j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16fc232a-07ad-4611-8612-7b1c3f784c14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ec909e6b2784f7c1e2fca82827a0c42109ef1c08e0c2ae484574c8f2c6460f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bqn6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.346388 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0af8746-c9f0-48e6-8a60-02fed286b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be9c299672c1ced5d45263cb73ea0d7766a3cd47bbb16996c91c24206203ffe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9528976f6e0625739097546d794445c24881673bfd0df525a77bbbd61e67897d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hwhw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.357490 4769 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-x582x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34fa095e-fc7f-431c-8421-1178e63721ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c630331d1e07a9bd283dfb95e9324ecc7666491cb2a4383ba82f45bbcaee3ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c8w6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x582x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.371476 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fclh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4186e93-df8a-49d3-9068-c8b8acd05baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4e835bfb6d47d3628a5f67cb226a00d51c3eebec57de5db55a54406bf1e6122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk8w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fclh4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.386971 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.403565 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0af8746-c9f0-48e6-8a60-02fed286b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be9c299672c1ced5d45263cb73ea0d7766a3cd47bbb16996c91c24206203ffe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9528976f6e0625739097546d794445c24881673bfd0df525a77bbbd61e67897d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hwhw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.414497 4769 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.414540 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.414555 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.414576 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.414592 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:08Z","lastTransitionTime":"2026-01-22T13:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.417685 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bqn6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16fc232a-07ad-4611-8612-7b1c3f784c14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ec909e6b2784f7c1e2fca82827a0c42109ef1c08e0c2ae484574c8f2c6460f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bqn6j\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.433493 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.446350 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x582x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34fa095e-fc7f-431c-8421-1178e63721ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c630331d1e07a9bd283dfb95e9324ecc7666491cb2a4383ba82f45bbcaee3ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c8w6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x582x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.467420 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fclh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4186e93-df8a-49d3-9068-c8b8acd05baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4e835bfb6d47d3628a5f67cb226a00d51c3eebec57de5db55a54406bf1e6122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk8w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fclh4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.490760 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c028db8-99b9-422d-ba46-e1a2db06ce3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73572
e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8176020a9c6407ebbc5e5935aca998a9a8133090e712cea593113a338827293b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\
\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.515389 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8acbc9bd-4838-4547-80a1-a1e16c37bd1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5acd2ae7d3fbf9440fb0095eefddc52c84728ec6d0e4d9821229cb6aa12573d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5c6216f99e720568a7dd7656244ade4b4fd44412e714b6135242a19f78761f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d261d3f5d679cb5cf8f94fe77bb969fd91ca614da19b149ce6eb02a380114d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://752fd19e1b4a5fd4026faf94d00dfd93
22825c19436cccb582e074f4c203bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04ae80bb757e0b6ea9a3759ec49af25cff9106966608aa9750a0026e6053d15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551
440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.518005 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.518056 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.518071 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.518094 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.518112 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:08Z","lastTransitionTime":"2026-01-22T13:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.530896 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2dae2b-fadf-49f8-8ff4-4772ad128dca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45689b9787ddf9d1ee0769e761b7b24a3804e1a92e4bc522b5fc4ef4dc145ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83046a8dcab554309ccb822f7ea451bd90e9bbe0037f2b51061cb2943122ac47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad0f1c98d4f80ff4fb3c0a0b65518ea6fdf45f4e428661eb8c34419d0774da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0a1825528a237f05c91fcea30668c1b12f4b58a604912844e672424907cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.546878 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340e014090eff13efd7d84a279f298e5d933733828964a3926f5ce780010c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for 
pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.568413 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.585019 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6f2c33f3aa2c24eb46a672d0d23d034a7833eb85c8d4c7313b04fd659d7db54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d9wdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.606764 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5e43a9-5dd9-470e-a3e1-65be2c0003c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 13:43:58.922221 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 13:43:58.922428 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 13:43:58.923819 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3279922669/tls.crt::/tmp/serving-cert-3279922669/tls.key\\\\\\\"\\\\nI0122 13:43:59.114141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 13:43:59.115832 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 13:43:59.115850 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 13:43:59.115871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 13:43:59.115876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 13:43:59.120589 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 13:43:59.120619 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120629 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 13:43:59.120651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 13:43:59.120656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 13:43:59.120662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 13:43:59.120676 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 13:43:59.123476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.620856 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.621180 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.621247 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.621320 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.621404 4769 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:08Z","lastTransitionTime":"2026-01-22T13:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.636423 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.666727 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b962200fa0a2d9c500125f56e96656fb0feb5d500a768d148cf9ca2a0569f970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f407182e277e55ccfc4bdc9fdd0832e0eab3cad2df3b32a66189e599dd303f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.676667 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b546a5839399c4434cd7427e59be44414fd81cd31a3b8736bbc23d9c03ab5fd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.723974 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.724008 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.724020 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.724036 4769 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.724048 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:08Z","lastTransitionTime":"2026-01-22T13:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.826388 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.826440 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.826456 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.826480 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.826496 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:08Z","lastTransitionTime":"2026-01-22T13:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.842144 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 07:38:21.188186627 +0000 UTC Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.930149 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.930202 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.930219 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.930243 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.930261 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:08Z","lastTransitionTime":"2026-01-22T13:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
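Every status patch above fails identically: the kubelet cannot call the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743/pod because that webhook's serving certificate expired on 2025-08-24T17:21:41Z, about five months before the node clock reads 2026-01-22T13:44:08Z. The following is a minimal diagnostic sketch, not part of the kubelet, that reproduces the same x509 validity check against the endpoint named in the log:

// certprobe.go: inspect the webhook serving certificate's validity window.
// The address is taken from the log entries above; adjust as needed.
package main

import (
	"crypto/tls"
	"fmt"
	"os"
	"time"
)

func main() {
	addr := "127.0.0.1:9743"
	// InsecureSkipVerify lets us read a certificate that would fail verification.
	conn, err := tls.Dial("tcp", addr, &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Fprintln(os.Stderr, "dial:", err)
		os.Exit(1)
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	now := time.Now()
	fmt.Printf("subject:    %s\n", cert.Subject)
	fmt.Printf("not before: %s\n", cert.NotBefore.Format(time.RFC3339))
	fmt.Printf("not after:  %s\n", cert.NotAfter.Format(time.RFC3339))
	if now.After(cert.NotAfter) {
		// Mirrors the failure text in the log: "current time ... is after ..."
		fmt.Printf("EXPIRED: current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
	}
}

Against the endpoint in this log the probe would print an expiry of 2025-08-24T17:21:41Z, matching the error string repeated in every failed patch.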
Has your network provider started?"} Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.033581 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.033646 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.033663 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.033687 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.033704 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:09Z","lastTransitionTime":"2026-01-22T13:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.136214 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.136296 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.136321 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.136357 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.136379 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:09Z","lastTransitionTime":"2026-01-22T13:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.238978 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.239010 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.239019 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.239032 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.239043 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:09Z","lastTransitionTime":"2026-01-22T13:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.341262 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.341347 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.341367 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.341395 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.341414 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:09Z","lastTransitionTime":"2026-01-22T13:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.444359 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.444414 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.444430 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.444454 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.444473 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:09Z","lastTransitionTime":"2026-01-22T13:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.546637 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.546959 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.547068 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.547153 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.547345 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:09Z","lastTransitionTime":"2026-01-22T13:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.649533 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.649571 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.649579 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.649596 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.649606 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:09Z","lastTransitionTime":"2026-01-22T13:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.752105 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.752145 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.752156 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.752171 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.752184 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:09Z","lastTransitionTime":"2026-01-22T13:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.842333 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 22:32:57.983804199 +0000 UTC Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.853955 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.853989 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.853997 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.854010 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.854018 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:09Z","lastTransitionTime":"2026-01-22T13:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.883224 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.883311 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:44:09 crc kubenswrapper[4769]: E0122 13:44:09.883398 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.883316 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:44:09 crc kubenswrapper[4769]: E0122 13:44:09.883510 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:44:09 crc kubenswrapper[4769]: E0122 13:44:09.883635 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.957554 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.957620 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.957638 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.957672 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.957690 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:09Z","lastTransitionTime":"2026-01-22T13:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.060371 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.060434 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.060452 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.060478 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.060497 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:10Z","lastTransitionTime":"2026-01-22T13:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.159457 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jrg8z_9c028db8-99b9-422d-ba46-e1a2db06ce3c/ovnkube-controller/0.log" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.162494 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.162526 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.162537 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.162555 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.162570 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:10Z","lastTransitionTime":"2026-01-22T13:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.163843 4769 generic.go:334] "Generic (PLEG): container finished" podID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerID="8176020a9c6407ebbc5e5935aca998a9a8133090e712cea593113a338827293b" exitCode=1 Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.163903 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" event={"ID":"9c028db8-99b9-422d-ba46-e1a2db06ce3c","Type":"ContainerDied","Data":"8176020a9c6407ebbc5e5935aca998a9a8133090e712cea593113a338827293b"} Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.164680 4769 scope.go:117] "RemoveContainer" containerID="8176020a9c6407ebbc5e5935aca998a9a8133090e712cea593113a338827293b" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.188283 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6f2c33f3aa2c24eb46a672d0d23d034a7833eb85c8d4c7313b04fd659d7db54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d9wdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:10Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.215310 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c028db8-99b9-422d-ba46-e1a2db06ce3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8176020a9c6407ebbc5e5935aca998a9a8133090e712cea593113a338827293b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8176020a9c6407ebbc5e5935aca998a9a8133090e712cea593113a338827293b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T13:44:10Z\\\",\\\"message\\\":\\\"sip/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 13:44:09.713207 6030 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 13:44:09.713339 6030 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0122 13:44:09.713430 6030 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 13:44:09.713447 6030 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0122 13:44:09.713715 6030 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0122 13:44:09.713739 6030 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0122 13:44:09.713842 6030 factory.go:656] Stopping watch factory\\\\nI0122 13:44:09.713875 6030 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0122 13:44:09.713887 6030 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:10Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.235934 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8acbc9bd-4838-4547-80a1-a1e16c37bd1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5acd2ae7d3fbf9440fb0095eefddc52c84728ec6d0e4d9821229cb6aa12573d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5c6216f99e720568a7dd7656244ade4b4fd44412e714b6135242a19f78761f\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d261d3f5d679cb5cf8f94fe77bb969fd91ca614da19b149ce6eb02a380114d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://752fd19e1b4a5fd4026faf94d00dfd9322825c19436cccb582e074f4c203bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04ae80bb757e0b6ea9a3759ec49af25cff9106966608aa9750a0026e6053d15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:10Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.247894 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2dae2b-fadf-49f8-8ff4-4772ad128dca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45689b9787ddf9d1ee0769e761b7b24a3804e1a92e4bc522b5fc4ef4dc145ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83046a8dcab554309ccb822f7ea451bd90e9bbe0037f2b51061cb2943122ac47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad0f1c98d4f80ff4fb3c0a0b65518ea6fdf45f4e428661eb8c34419d0774da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0a1825528a237f05c91fcea30668c1b12f4b58a604912844e672424907cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:10Z is after 2025-08-24T17:21:41Z"
Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.264700 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.264759 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.264775 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.264824 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.264855 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:10Z","lastTransitionTime":"2026-01-22T13:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.271152 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340e014090eff13efd7d84a279f298e5d933733828964a3926f5ce780010c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:10Z is after 2025-08-24T17:21:41Z"
Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.281827 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:10Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.294174 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5e43a9-5dd9-470e-a3e1-65be2c0003c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 13:43:58.922221 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 13:43:58.922428 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 13:43:58.923819 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3279922669/tls.crt::/tmp/serving-cert-3279922669/tls.key\\\\\\\"\\\\nI0122 13:43:59.114141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 13:43:59.115832 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 13:43:59.115850 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 13:43:59.115871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 13:43:59.115876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 13:43:59.120589 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 13:43:59.120619 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120629 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 13:43:59.120651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 13:43:59.120656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 13:43:59.120662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 13:43:59.120676 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 13:43:59.123476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:10Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.307988 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:10Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.321201 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b962200fa0a2d9c500125f56e96656fb0feb5d500a768d148cf9ca2a0569f970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f407182e277e55ccfc4bdc9fdd0832e0eab3cad2df3b32a66189e599dd303f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:10Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.340629 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b546a5839399c4434cd7427e59be44414fd81cd31a3b8736bbc23d9c03ab5fd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:10Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.355330 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0af8746-c9f0-48e6-8a60-02fed286b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be9c299672c1ced5d45263cb73ea0d7766a3cd47bbb16996c91c24206203ffe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9528976f6e0625739097546d794445c24881673bfd0df525a77bbbd61e67897d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hwhw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:10Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.366652 4769 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-bqn6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16fc232a-07ad-4611-8612-7b1c3f784c14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ec909e6b2784f7c1e2fca82827a0c42109ef1c08e0c2ae484574c8f2c6460f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bqn6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:10Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.367675 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.367734 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.367748 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.367770 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.367782 4769 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:10Z","lastTransitionTime":"2026-01-22T13:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.386185 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:10Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.399238 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x582x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34fa095e-fc7f-431c-8421-1178e63721ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c630331d1e07a9bd283dfb95e9324ecc7666491cb2a4383ba82f45bbcaee3ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c8w6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x582x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-22T13:44:10Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.413809 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fclh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4186e93-df8a-49d3-9068-c8b8acd05baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4e835bfb6d47d3628a5f67cb226a00d51c3eebec57de5db55a54406bf1e6122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk8w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":
\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fclh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:10Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.470931 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.470979 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.470992 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.471010 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.471024 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:10Z","lastTransitionTime":"2026-01-22T13:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.573595 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.573635 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.573649 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.573667 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.573678 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:10Z","lastTransitionTime":"2026-01-22T13:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.675871 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.675950 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.675962 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.675983 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.675996 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:10Z","lastTransitionTime":"2026-01-22T13:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.778537 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.778589 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.778602 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.778618 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.778631 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:10Z","lastTransitionTime":"2026-01-22T13:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.842873 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 08:38:58.229576558 +0000 UTC Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.881090 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.881143 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.881159 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.881180 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.881194 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:10Z","lastTransitionTime":"2026-01-22T13:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.901324 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8acbc9bd-4838-4547-80a1-a1e16c37bd1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5acd2ae7d3fbf9440fb0095eefddc52c84728ec6d0e4d9821229cb6aa12573d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5c6216f99e720568a7dd7656244ade4
b4fd44412e714b6135242a19f78761f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d261d3f5d679cb5cf8f94fe77bb969fd91ca614da19b149ce6eb02a380114d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://752fd19e1b4a5fd4026faf94d00dfd9322825c19436cccb582e074f4c203bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04ae80bb757e0b6ea9a3759ec49af25cff9106966608aa9750a0026e6053d15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"image\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:10Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.916360 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2dae2b-fadf-49f8-8ff4-4772ad128dca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45689b9787ddf9d1ee0769e761b7b24a3804e1a92e4bc522b5fc4ef4dc145ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83046a8dcab554309ccb822f7ea451bd90e9bbe0037f2b51061cb2943122ac47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad0f1c98d4f80ff4fb3c0a0b65518ea6fdf45f4e428661eb8c34419d0774da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0a1825528a237f05c91fcea30668c1b12f4b58a604912844e672424907cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:10Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.931633 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340e014090eff13efd7d84a279f298e5d933733828964a3926f5ce780010c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:10Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.949595 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:10Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.974038 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6f2c33f3aa2c24eb46a672d0d23d034a7833eb85c8d4c7313b04fd659d7db54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d9wdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:10Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.983479 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.983523 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:10 crc 
kubenswrapper[4769]: I0122 13:44:10.983536 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.983553 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.983565 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:10Z","lastTransitionTime":"2026-01-22T13:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.062223 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c028db8-99b9-422d-ba46-e1a2db06ce3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8176020a9c6407ebbc5e5935aca998a9a8133090
e712cea593113a338827293b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8176020a9c6407ebbc5e5935aca998a9a8133090e712cea593113a338827293b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T13:44:10Z\\\",\\\"message\\\":\\\"sip/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 13:44:09.713207 6030 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 13:44:09.713339 6030 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0122 13:44:09.713430 6030 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 13:44:09.713447 6030 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0122 13:44:09.713715 6030 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0122 13:44:09.713739 6030 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0122 13:44:09.713842 6030 factory.go:656] Stopping watch factory\\\\nI0122 13:44:09.713875 6030 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0122 13:44:09.713887 6030 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:11Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.082666 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5e43a9-5dd9-470e-a3e1-65be2c0003c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 13:43:58.922221 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 13:43:58.922428 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 13:43:58.923819 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3279922669/tls.crt::/tmp/serving-cert-3279922669/tls.key\\\\\\\"\\\\nI0122 13:43:59.114141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 13:43:59.115832 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 13:43:59.115850 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 13:43:59.115871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 13:43:59.115876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 13:43:59.120589 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 13:43:59.120619 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120629 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 13:43:59.120651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 13:43:59.120656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 13:43:59.120662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 13:43:59.120676 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 13:43:59.123476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:11Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.085189 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.085212 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.085220 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.085234 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.085244 4769 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:11Z","lastTransitionTime":"2026-01-22T13:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.096905 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:11Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:11 crc kubenswrapper[4769]: E0122 13:44:11.099295 4769 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c028db8_99b9_422d_ba46_e1a2db06ce3c.slice/crio-21a6f61ed512e5cacca4b895a2de4369e69b116f0a55236b623ab8f3bb9a938a.scope\": RecentStats: unable to find data in memory cache]" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.114732 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b962200fa0a2d9c500125f56e96656fb0feb5d500a768d148cf9ca2a0569f970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f407182e277e55ccfc4bdc9fdd0832e0eab3cad2df3b32a66189e599dd303f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773
257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:11Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.126344 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b546a5839399c4434cd7427e59be44414fd81cd31a3b8736bbc23d9c03ab5fd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:11Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.140741 4769 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0af8746-c9f0-48e6-8a60-02fed286b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be9c299672c1ced5d45263cb73ea0d7766a3cd47bbb16996c91c24206203ffe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9528976f6e0625739097546d794445c24881673bfd0df525a77bbbd61e67897d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hwhw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:11Z is after 2025-08-24T17:21:41Z" Jan 22 
13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.152855 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bqn6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16fc232a-07ad-4611-8612-7b1c3f784c14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ec909e6b2784f7c1e2fca82827a0c42109ef1c08e0c2ae484574c8f2c6460f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bqn6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:11Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.166160 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:11Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.169053 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jrg8z_9c028db8-99b9-422d-ba46-e1a2db06ce3c/ovnkube-controller/0.log" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.171501 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" event={"ID":"9c028db8-99b9-422d-ba46-e1a2db06ce3c","Type":"ContainerStarted","Data":"21a6f61ed512e5cacca4b895a2de4369e69b116f0a55236b623ab8f3bb9a938a"} Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.171846 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.178855 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x582x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34fa095e-fc7f-431c-8421-1178e63721ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c630331d1e07a9bd283dfb95e9324ecc7666491cb2a4383ba82f45bbcaee3ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c8w6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x582x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:11Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.187885 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.187925 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.187937 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.187954 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.187965 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:11Z","lastTransitionTime":"2026-01-22T13:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.198236 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fclh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4186e93-df8a-49d3-9068-c8b8acd05baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4e835bfb6d47d3628a5f67cb226a00d51c3eebec57de5db55a54406bf1e6122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk8w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fclh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:11Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.216968 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5e43a9-5dd9-470e-a3e1-65be2c0003c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\
":\\\"cri-o://932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 13:43:58.922221 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 13:43:58.922428 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 13:43:58.923819 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3279922669/tls.crt::/tmp/serving-cert-3279922669/tls.key\\\\\\\"\\\\nI0122 13:43:59.114141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 13:43:59.115832 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 13:43:59.115850 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 13:43:59.115871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 13:43:59.115876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 13:43:59.120589 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 13:43:59.120619 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120629 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 13:43:59.120651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 13:43:59.120656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 13:43:59.120662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 13:43:59.120676 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 13:43:59.123476 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:11Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.232263 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:11Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.247447 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b962200fa0a2d9c500125f56e96656fb0feb5d500a768d148cf9ca2a0569f970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f407182e277e55ccfc4bdc9fdd0832e0eab3cad2df3b32a66189e599dd303f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:11Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.261056 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b546a5839399c4434cd7427e59be44414fd81cd31a3b8736bbc23d9c03ab5fd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:11Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.271841 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0af8746-c9f0-48e6-8a60-02fed286b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be9c299672c1ced5d45263cb73ea0d7766a3cd47bbb16996c91c24206203ffe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9528976f6e0625739097546d794445c24881673bfd0df525a77bbbd61e67897d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hwhw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:11Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.283287 4769 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-bqn6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16fc232a-07ad-4611-8612-7b1c3f784c14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ec909e6b2784f7c1e2fca82827a0c42109ef1c08e0c2ae484574c8f2c6460f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bqn6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:11Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.290556 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.290620 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.290638 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.290662 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.290681 4769 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:11Z","lastTransitionTime":"2026-01-22T13:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.297466 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:11Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.307684 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x582x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34fa095e-fc7f-431c-8421-1178e63721ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c630331d1e07a9bd283dfb95e9324ecc7666491cb2a4383ba82f45bbcaee3ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c8w6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x582x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-22T13:44:11Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.320888 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fclh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4186e93-df8a-49d3-9068-c8b8acd05baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4e835bfb6d47d3628a5f67cb226a00d51c3eebec57de5db55a54406bf1e6122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk8w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":
\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fclh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:11Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.343233 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c028db8-99b9-422d-ba46-e1a2db06ce3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},
{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a6f61ed512e5cacca4b895a2de4369e69b116f0a55236b623ab8f3bb9a938a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8176020a9c6407ebbc5e5935aca998a9a8133090e712cea593113a338827293b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T13:44:10Z\\\",\\\"message\\\":\\\"sip/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 13:44:09.713207 6030 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 13:44:09.713339 6030 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0122 13:44:09.713430 6030 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 13:44:09.713447 6030 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0122 13:44:09.713715 6030 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0122 13:44:09.713739 6030 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0122 13:44:09.713842 6030 factory.go:656] Stopping watch factory\\\\nI0122 13:44:09.713875 6030 handler.go:208] Removed 
*v1.Namespace event handler 5\\\\nI0122 13:44:09.713887 6030 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.1
1\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:11Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.372630 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8acbc9bd-4838-4547-80a1-a1e16c37bd1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5acd2ae7d3fbf9440fb0095eefddc52c84728ec6d0e4d9821229cb6aa12573d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5c6216f99e720568a7dd7656244ade4b4fd44412e714b6135242a19f78761f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d261d3f5d679cb5cf8f94fe77bb969fd91ca614da19b149ce6eb02a380114d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://752fd19e1b4a5fd4026faf94d00dfd9322825c1
9436cccb582e074f4c203bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04ae80bb757e0b6ea9a3759ec49af25cff9106966608aa9750a0026e6053d15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:11Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.386650 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2dae2b-fadf-49f8-8ff4-4772ad128dca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45689b9787ddf9d1ee0769e761b7b24a3804e1a92e4bc522b5fc4ef4dc145ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83046a8dcab554309ccb822f7ea451bd90e9bbe0037f2b51061cb2943122ac47\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad0f1c98d4f80ff4fb3c0a0b65518ea6fdf45f4e428661eb8c34419d0774da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0a1825528a237f05c91fcea30668c1b12f4b58a604912844e672424907cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:11Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.392852 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.392912 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.392921 4769 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.392934 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.392942 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:11Z","lastTransitionTime":"2026-01-22T13:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.406028 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340e014090eff13efd7d84a279f298e5d933733828964a3926f5ce780010c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:11Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.425972 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:11Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.448691 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6f2c33f3aa2c24eb46a672d0d23d034a7833eb85c8d4c7313b04fd659d7db54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d9wdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:11Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.496335 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.496404 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:11 crc 
kubenswrapper[4769]: I0122 13:44:11.496425 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.496452 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.496470 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:11Z","lastTransitionTime":"2026-01-22T13:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.600370 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.600433 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.600445 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.600463 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.600475 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:11Z","lastTransitionTime":"2026-01-22T13:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.703091 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.703143 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.703154 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.703176 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.703188 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:11Z","lastTransitionTime":"2026-01-22T13:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.806290 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.806344 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.806357 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.806705 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.806727 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:11Z","lastTransitionTime":"2026-01-22T13:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.844033 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 08:02:13.137322602 +0000 UTC Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.882724 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.882757 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:44:11 crc kubenswrapper[4769]: E0122 13:44:11.882881 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.882738 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:44:11 crc kubenswrapper[4769]: E0122 13:44:11.883051 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:44:11 crc kubenswrapper[4769]: E0122 13:44:11.883243 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.908894 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.908945 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.908958 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.908974 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.908986 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:11Z","lastTransitionTime":"2026-01-22T13:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.011496 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.011580 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.011594 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.011618 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.011636 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:12Z","lastTransitionTime":"2026-01-22T13:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.114611 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.114656 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.114668 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.114682 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.114693 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:12Z","lastTransitionTime":"2026-01-22T13:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.176394 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jrg8z_9c028db8-99b9-422d-ba46-e1a2db06ce3c/ovnkube-controller/1.log" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.177414 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jrg8z_9c028db8-99b9-422d-ba46-e1a2db06ce3c/ovnkube-controller/0.log" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.180399 4769 generic.go:334] "Generic (PLEG): container finished" podID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerID="21a6f61ed512e5cacca4b895a2de4369e69b116f0a55236b623ab8f3bb9a938a" exitCode=1 Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.180453 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" event={"ID":"9c028db8-99b9-422d-ba46-e1a2db06ce3c","Type":"ContainerDied","Data":"21a6f61ed512e5cacca4b895a2de4369e69b116f0a55236b623ab8f3bb9a938a"} Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.180499 4769 scope.go:117] "RemoveContainer" containerID="8176020a9c6407ebbc5e5935aca998a9a8133090e712cea593113a338827293b" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.181027 4769 scope.go:117] "RemoveContainer" containerID="21a6f61ed512e5cacca4b895a2de4369e69b116f0a55236b623ab8f3bb9a938a" Jan 22 13:44:12 crc kubenswrapper[4769]: E0122 13:44:12.181215 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-jrg8z_openshift-ovn-kubernetes(9c028db8-99b9-422d-ba46-e1a2db06ce3c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.201830 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:12Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.215459 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x582x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34fa095e-fc7f-431c-8421-1178e63721ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c630331d1e07a9bd283dfb95e9324ecc7666491cb2a4383ba82f45bbcaee3ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c8w6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x582x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:12Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.220900 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.221076 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.221106 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.221130 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.221147 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:12Z","lastTransitionTime":"2026-01-22T13:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.239103 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fclh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4186e93-df8a-49d3-9068-c8b8acd05baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4e835bfb6d47d3628a5f67cb226a00d51c3eebec57de5db55a54406bf1e6122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk8w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fclh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:12Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.255343 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340e014090eff13efd7d84a279f298e5d933733828964a3926f5ce780010c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:12Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.270263 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:12Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.291384 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6f2c33f3aa2c24eb46a672d0d23d034a7833eb85c8d4c7313b04fd659d7db54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d9wdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:12Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.323519 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.323555 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:12 crc 
kubenswrapper[4769]: I0122 13:44:12.323566 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.323583 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.323595 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:12Z","lastTransitionTime":"2026-01-22T13:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.323680 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c028db8-99b9-422d-ba46-e1a2db06ce3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a6f61ed512e5cacca4b895a2de4369e69b116f
0a55236b623ab8f3bb9a938a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8176020a9c6407ebbc5e5935aca998a9a8133090e712cea593113a338827293b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T13:44:10Z\\\",\\\"message\\\":\\\"sip/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 13:44:09.713207 6030 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 13:44:09.713339 6030 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0122 13:44:09.713430 6030 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 13:44:09.713447 6030 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0122 13:44:09.713715 6030 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0122 13:44:09.713739 6030 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0122 13:44:09.713842 6030 factory.go:656] Stopping watch factory\\\\nI0122 13:44:09.713875 6030 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0122 13:44:09.713887 6030 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21a6f61ed512e5cacca4b895a2de4369e69b116f0a55236b623ab8f3bb9a938a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T13:44:11Z\\\",\\\"message\\\":\\\" 6154 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:10Z is after 2025-08-24T17:21:41Z]\\\\nI0122 13:44:11.061703 6154 services_controller.go:434] Service openshift-operator-lifecycle-manager/packageserver-service retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{packageserver-service openshift-operator-lifecycle-manager a60a1f74-c6ff-4c81-96ae-27ba9796ba61 5485 0 2025-02-23 05:23:24 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e 
map[olm.managed:true] map[] [{operators.coreos.com/v1alpha1 ClusterServiceVersion packageserver bbc08db6-5ba4-4fc4-b49d-26331e1e728b 0xc007b5cb4d 0xc007b5cb4e}] [] []},Spec:ServiceSpec{Ports:[]ServicePo\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuse
s\\\":[{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:12Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.362942 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8acbc9bd-4838-4547-80a1-a1e16c37bd1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5acd2ae7d3fbf9440fb0095eefddc52c84728ec6d0e4d9821229cb6aa12573d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5c6216f99e720568a7dd7656244ade4b4fd44412e714b6135242a19f78761f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d261d3f5d679cb5cf8f94fe77bb969fd91ca614da19b149ce6eb02a380114d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://752fd19e1b4a5fd4026faf94d00dfd9322825c1
9436cccb582e074f4c203bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04ae80bb757e0b6ea9a3759ec49af25cff9106966608aa9750a0026e6053d15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:12Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.380002 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2dae2b-fadf-49f8-8ff4-4772ad128dca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45689b9787ddf9d1ee0769e761b7b24a3804e1a92e4bc522b5fc4ef4dc145ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83046a8dcab554309ccb822f7ea451bd90e9bbe0037f2b51061cb2943122ac47\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad0f1c98d4f80ff4fb3c0a0b65518ea6fdf45f4e428661eb8c34419d0774da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0a1825528a237f05c91fcea30668c1b12f4b58a604912844e672424907cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:12Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.395732 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b962200fa0a2d9c500125f56e96656fb0feb5d500a768d148cf9ca2a0569f970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f407182e277e55ccfc4bdc9fdd0832e0eab3cad2df3b32a66189e599dd303f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:12Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.408480 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b546a5839399c4434cd7427e59be44414fd81cd31a3b8736bbc23d9c03ab5fd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:12Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.426951 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.427201 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.427386 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.427546 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.425920 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5e43a9-5dd9-470e-a3e1-65be2c0003c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 13:43:58.922221 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 13:43:58.922428 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 13:43:58.923819 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3279922669/tls.crt::/tmp/serving-cert-3279922669/tls.key\\\\\\\"\\\\nI0122 13:43:59.114141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 13:43:59.115832 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 13:43:59.115850 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 13:43:59.115871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 13:43:59.115876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 13:43:59.120589 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 13:43:59.120619 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120629 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 13:43:59.120651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 13:43:59.120656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 13:43:59.120662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 13:43:59.120676 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 13:43:59.123476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:12Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.427684 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:12Z","lastTransitionTime":"2026-01-22T13:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.443538 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:12Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.457228 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0af8746-c9f0-48e6-8a60-02fed286b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be9c299672c1ced5d45263cb73ea0d7766a3cd47bbb16996c91c24206203ffe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9528976f6e0625739097546d794445c24881673bfd0df525a77bbbd61e67897d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hwhw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:12Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.467415 4769 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-bqn6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16fc232a-07ad-4611-8612-7b1c3f784c14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ec909e6b2784f7c1e2fca82827a0c42109ef1c08e0c2ae484574c8f2c6460f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bqn6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:12Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.530893 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.531000 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.531022 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.531050 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.531067 4769 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:12Z","lastTransitionTime":"2026-01-22T13:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.545762 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.545825 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.545836 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.545858 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.545874 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:12Z","lastTransitionTime":"2026-01-22T13:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:12 crc kubenswrapper[4769]: E0122 13:44:12.560356 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c179e315-653f-44a2-90da-146c8bca7b57\\\",\\\"systemUUID\\\":\\\"a3bb8776-1087-4679-a96f-5f1347bd430e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:12Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.565137 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.565169 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.565179 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.565195 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.565205 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:12Z","lastTransitionTime":"2026-01-22T13:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:12 crc kubenswrapper[4769]: E0122 13:44:12.585315 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c179e315-653f-44a2-90da-146c8bca7b57\\\",\\\"systemUUID\\\":\\\"a3bb8776-1087-4679-a96f-5f1347bd430e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:12Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.590215 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.590323 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.590358 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.590393 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.590420 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:12Z","lastTransitionTime":"2026-01-22T13:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:12 crc kubenswrapper[4769]: E0122 13:44:12.610619 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c179e315-653f-44a2-90da-146c8bca7b57\\\",\\\"systemUUID\\\":\\\"a3bb8776-1087-4679-a96f-5f1347bd430e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:12Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.614683 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.614827 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
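Every one of these retries fails identically: the serving certificate behind https://127.0.0.1:9743 expired on 2025-08-24T17:21:41Z, months before the node's current clock of 2026-01-22T13:44:12Z. The "x509: certificate has expired or is not yet valid" text comes from the verifier's validity-window check, which rejects any certificate whose NotBefore/NotAfter bounds do not bracket the current time. A minimal Go sketch of that check, assuming a hypothetical PEM file webhook-server.crt (an illustration of the failing test, not the kubelet's or webhook's own code):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Hypothetical path; substitute the webhook's actual serving certificate.
	data, err := os.ReadFile("webhook-server.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, "read:", err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, "parse:", err)
		os.Exit(1)
	}
	// The same window test the TLS verifier applies: a certificate is only
	// valid while NotBefore <= now <= NotAfter.
	now := time.Now().UTC()
	switch {
	case now.Before(cert.NotBefore):
		fmt.Printf("not yet valid: current time %s is before %s\n",
			now.Format(time.RFC3339), cert.NotBefore.UTC().Format(time.RFC3339))
	case now.After(cert.NotAfter):
		fmt.Printf("expired: current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
	default:
		fmt.Println("within validity window until", cert.NotAfter.UTC().Format(time.RFC3339))
	}
}

Retrying cannot succeed while this test fails; the fix is reissuing the webhook certificate (or correcting a skewed clock), not more attempts.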
event="NodeHasNoDiskPressure" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.614894 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.614969 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.615048 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:12Z","lastTransitionTime":"2026-01-22T13:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:12 crc kubenswrapper[4769]: E0122 13:44:12.628306 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c179e315-653f-44a2-90da-146c8bca7b57\\\",\\\"systemUUID\\\":\\\"a3bb8776-1087-4679-a96f-5f1347bd430e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:12Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.631969 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.632052 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
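The payload the kubelet resubmits on every attempt is a strategic-merge patch against the node's .status: the four condition updates, allocatable and capacity figures, the node's cached image list (the bulk of the bytes), and bootID/systemUUID under nodeInfo. A short sketch that pulls the condition summary out of such a payload, assuming the patch JSON has been unescaped and saved to a hypothetical patch.json; the field names follow the payload above, and unknown keys such as $setElementOrder/conditions are simply ignored by the decoder:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// Only the fields we want to inspect; encoding/json skips everything else.
type statusPatch struct {
	Status struct {
		Conditions []struct {
			Type    string `json:"type"`
			Status  string `json:"status"`
			Reason  string `json:"reason"`
			Message string `json:"message"`
		} `json:"conditions"`
		Images []struct {
			Names     []string `json:"names"`
			SizeBytes int64    `json:"sizeBytes"`
		} `json:"images"`
	} `json:"status"`
}

func main() {
	// Hypothetical file holding the unescaped patch body from the log line.
	data, err := os.ReadFile("patch.json")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var p statusPatch
	if err := json.Unmarshal(data, &p); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Prints the four conditions (MemoryPressure, DiskPressure, PIDPressure, Ready).
	for _, c := range p.Status.Conditions {
		fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
	}
	fmt.Printf("%d cached images carried in the payload\n", len(p.Status.Images))
}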
event="NodeHasNoDiskPressure" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.632067 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.632082 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.632093 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:12Z","lastTransitionTime":"2026-01-22T13:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:12 crc kubenswrapper[4769]: E0122 13:44:12.649902 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c179e315-653f-44a2-90da-146c8bca7b57\\\",\\\"systemUUID\\\":\\\"a3bb8776-1087-4679-a96f-5f1347bd430e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:12Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:12 crc kubenswrapper[4769]: E0122 13:44:12.650019 4769 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.651546 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
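The "exceeds retry count" line marks the end of a fixed retry budget: the kubelet attempts the status update a bounded number of times per sync (five attempts here: one failure whose tail opens this excerpt plus the four logged above) and then abandons the sync until the next node-status interval. A schematic of that loop with hypothetical names (the constant and the two messages mirror the log; the always-failing call stands in for the real PATCH against the API server):

package main

import (
	"errors"
	"fmt"
)

const nodeStatusUpdateRetry = 5 // fixed per-sync retry budget

// tryUpdateNodeStatus stands in for the real status PATCH; here it always
// fails, the way the expired webhook certificate makes it fail above.
func tryUpdateNodeStatus(attempt int) error {
	return errors.New("failed calling webhook: tls: failed to verify certificate")
}

func updateNodeStatus() error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := tryUpdateNodeStatus(i); err != nil {
			fmt.Printf("E attempt %d: error updating node status, will retry: %v\n", i+1, err)
			continue
		}
		return nil
	}
	// The terminal message seen in the log once every attempt has failed.
	return fmt.Errorf("update node status exceeds retry count")
}

func main() {
	if err := updateNodeStatus(); err != nil {
		fmt.Println("E", err)
	}
}

Because the whole sync repeats on the next interval, the same five-failure burst recurs for as long as the webhook certificate stays expired, which is exactly the pattern this log shows.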
event="NodeHasSufficientMemory"
Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.651605 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.651621 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.651638 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.651649 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:12Z","lastTransitionTime":"2026-01-22T13:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.753857 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.753902 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.753914 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.753932 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.753943 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:12Z","lastTransitionTime":"2026-01-22T13:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.845097 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 22:25:26.204155786 +0000 UTC
Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.856764 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.856876 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.856896 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.856922 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.856940 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:12Z","lastTransitionTime":"2026-01-22T13:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.964530 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.965026 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.965048 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.965078 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.965104 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:12Z","lastTransitionTime":"2026-01-22T13:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.068188 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.068307 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.068327 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.068378 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.068398 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:13Z","lastTransitionTime":"2026-01-22T13:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.171106 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.171152 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.171162 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.171180 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.171191 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:13Z","lastTransitionTime":"2026-01-22T13:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.187004 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jrg8z_9c028db8-99b9-422d-ba46-e1a2db06ce3c/ovnkube-controller/1.log"
Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.192420 4769 scope.go:117] "RemoveContainer" containerID="21a6f61ed512e5cacca4b895a2de4369e69b116f0a55236b623ab8f3bb9a938a"
Jan 22 13:44:13 crc kubenswrapper[4769]: E0122 13:44:13.192693 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-jrg8z_openshift-ovn-kubernetes(9c028db8-99b9-422d-ba46-e1a2db06ce3c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c"
Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.213851 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5e43a9-5dd9-470e-a3e1-65be2c0003c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 13:43:58.922221 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 13:43:58.922428 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 13:43:58.923819 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3279922669/tls.crt::/tmp/serving-cert-3279922669/tls.key\\\\\\\"\\\\nI0122 13:43:59.114141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 13:43:59.115832 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 13:43:59.115850 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 13:43:59.115871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 13:43:59.115876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 13:43:59.120589 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 13:43:59.120619 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120629 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 13:43:59.120651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 13:43:59.120656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 13:43:59.120662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 13:43:59.120676 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 13:43:59.123476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:13Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.229961 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:13Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.246338 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b962200fa0a2d9c500125f56e96656fb0feb5d500a768d148cf9ca2a0569f970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f407182e277e55ccfc4bdc9fdd0832e0eab3cad2df3b32a66189e599dd303f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:13Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.262305 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b546a5839399c4434cd7427e59be44414fd81cd31a3b8736bbc23d9c03ab5fd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:13Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.274671 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.274816 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.274833 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.274855 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.274870 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:13Z","lastTransitionTime":"2026-01-22T13:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.275746 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0af8746-c9f0-48e6-8a60-02fed286b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be9c299672c1ced5d45263cb73ea0d7766a3cd47bbb16996c91c24206203ffe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9528976f6e0625739097546d794445c24881673bfd0df525a77bbbd61e67897d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hwhw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:13Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.290683 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bqn6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16fc232a-07ad-4611-8612-7b1c3f784c14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ec909e6b2784f7c1e2fca82827a0c42109ef1c08e0c2ae484574c8f2c6460f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bqn6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:13Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.306453 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:13Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.317538 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x582x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34fa095e-fc7f-431c-8421-1178e63721ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c630331d1e07a9bd283dfb95e9324ecc7666491cb2a4383ba82f45bbcaee3ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c8w6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x582x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:13Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.333058 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fclh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4186e93-df8a-49d3-9068-c8b8acd05baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4e835bfb6d47d3628a5f67cb226a00d51c3eebec57de5db55a54406bf1e6122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk8w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fclh4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:13Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.350842 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6f2c33f3aa2c24eb46a672d0d23d034a7833eb85c8d4c7313b04fd659d7db54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d9wdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T13:44:13Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.378158 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c028db8-99b9-422d-ba46-e1a2db06ce3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a6f61ed512e5cacca4b895a2de4369e69b116f0a55236b623ab8f3bb9a938a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21a6f61ed512e5cacca4b895a2de4369e69b116f0a55236b623ab8f3bb9a938a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T13:44:11Z\\\",\\\"message\\\":\\\" 6154 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:10Z is after 2025-08-24T17:21:41Z]\\\\nI0122 13:44:11.061703 6154 services_controller.go:434] Service openshift-operator-lifecycle-manager/packageserver-service retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{packageserver-service openshift-operator-lifecycle-manager a60a1f74-c6ff-4c81-96ae-27ba9796ba61 5485 0 2025-02-23 05:23:24 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[olm.managed:true] map[] [{operators.coreos.com/v1alpha1 ClusterServiceVersion packageserver bbc08db6-5ba4-4fc4-b49d-26331e1e728b 0xc007b5cb4d 0xc007b5cb4e}] [] 
[]},Spec:ServiceSpec{Ports:[]ServicePo\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-jrg8z_openshift-ovn-kubernetes(9c028db8-99b9-422d-ba46-e1a2db06ce3c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:13Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.378943 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.379562 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.379588 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.379617 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.379631 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:13Z","lastTransitionTime":"2026-01-22T13:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.406294 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8acbc9bd-4838-4547-80a1-a1e16c37bd1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5acd2ae7d3fbf9440fb0095eefddc52c84728ec6d0e4d9821229cb6aa12573d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5c6216f99e720568a7dd7656244ade4b4fd44412e714b6135242a19f78761f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d261d3f5d679cb5cf8f94fe77bb969fd91ca614da19b149ce6eb02a380114d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://752fd19e1b4a5fd4026faf94d00dfd9322825c19436cccb582e074f4c203bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04ae80bb757e0b6ea9a3759ec49af25cff9106966608aa9750a0026e6053d15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:13Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.422750 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2dae2b-fadf-49f8-8ff4-4772ad128dca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45689b9787ddf9d1ee0769e761b7b24a3804e1a92e4bc522b5fc4ef4dc145ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83046a8dcab554309ccb822f7ea451bd90e9bbe0037f2b51061cb2943122ac47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad0f1c98d4f80ff4fb3c0a0b65518ea6fdf45f4e428661eb8c34419d0774da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0a1825528a237f05c91fcea30668c1b12f4b58a604912844e672424907cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:13Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.437355 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340e014090eff13efd7d84a279f298e5d933733828964a3926f5ce780010c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:13Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.450916 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:13Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.482613 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.482837 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.482918 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.483001 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.483078 4769 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:13Z","lastTransitionTime":"2026-01-22T13:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.498841 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pwktf"] Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.499517 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pwktf" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.502323 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.502603 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.519273 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2dae2b-fadf-49f8-8ff4-4772ad128dca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45689b9787ddf9d1ee0769e761b7b24a3804e1a92e4bc522b5fc4ef4dc145ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83046a8dcab554309ccb822f7ea451bd90e9bbe0037f2b51061cb2943122ac47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad0f1c98d4f80ff4fb3c0a0b65518ea6fdf45f4e428661eb8c34419d0774da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0a1825528a237f05c91fcea30668c1b12f4b58a604912844e672424907cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:13Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.535238 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340e014090eff13efd7d84a279f298e5d933733828964a3926f5ce780010c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:13Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.550293 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:13Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.566680 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6f2c33f3aa2c24eb46a672d0d23d034a7833eb85c8d4c7313b04fd659d7db54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f
8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:06Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d9wdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:13Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.584871 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c028db8-99b9-422d-ba46-e1a2db06ce3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a6f61ed512e5cacca4b895a2de4369e69b116f
0a55236b623ab8f3bb9a938a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21a6f61ed512e5cacca4b895a2de4369e69b116f0a55236b623ab8f3bb9a938a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T13:44:11Z\\\",\\\"message\\\":\\\" 6154 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:10Z is after 2025-08-24T17:21:41Z]\\\\nI0122 13:44:11.061703 6154 services_controller.go:434] Service openshift-operator-lifecycle-manager/packageserver-service retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{packageserver-service openshift-operator-lifecycle-manager a60a1f74-c6ff-4c81-96ae-27ba9796ba61 5485 0 2025-02-23 05:23:24 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[olm.managed:true] map[] [{operators.coreos.com/v1alpha1 ClusterServiceVersion packageserver bbc08db6-5ba4-4fc4-b49d-26331e1e728b 0xc007b5cb4d 0xc007b5cb4e}] [] []},Spec:ServiceSpec{Ports:[]ServicePo\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jrg8z_openshift-ovn-kubernetes(9c028db8-99b9-422d-ba46-e1a2db06ce3c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:13Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.586256 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.586317 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.586336 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.586364 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.586385 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:13Z","lastTransitionTime":"2026-01-22T13:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.607553 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8acbc9bd-4838-4547-80a1-a1e16c37bd1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5acd2ae7d3fbf9440fb0095eefddc52c84728ec6d0e4d9821229cb6aa12573d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5c6216f99e720568a7dd7656244ade4b4fd44412e714b6135242a19f78761f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d261d3f5d679cb5cf8f94fe77bb969fd91ca614da19b149ce6eb02a380114d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://752fd19e1b4a5fd4026faf94d00dfd9322825c19436cccb582e074f4c203bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04ae80bb757e0b6ea9a3759ec49af25cff9106966608aa9750a0026e6053d15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:13Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.616049 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/29c69aef-2c74-4731-8334-85c8c755be74-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-pwktf\" (UID: \"29c69aef-2c74-4731-8334-85c8c755be74\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pwktf" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.616189 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/29c69aef-2c74-4731-8334-85c8c755be74-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-pwktf\" (UID: \"29c69aef-2c74-4731-8334-85c8c755be74\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pwktf" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.616285 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/29c69aef-2c74-4731-8334-85c8c755be74-env-overrides\") pod \"ovnkube-control-plane-749d76644c-pwktf\" (UID: \"29c69aef-2c74-4731-8334-85c8c755be74\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pwktf" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.616327 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m892q\" (UniqueName: \"kubernetes.io/projected/29c69aef-2c74-4731-8334-85c8c755be74-kube-api-access-m892q\") pod \"ovnkube-control-plane-749d76644c-pwktf\" (UID: \"29c69aef-2c74-4731-8334-85c8c755be74\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pwktf" Jan 
22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.628394 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:13Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.642090 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b962200fa0a2d9c500125f56e96656fb0feb5d500a768d148cf9ca2a0569f970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f407182e277e55ccfc4bdc9fdd0832e0eab3cad2df3b32a66189e599dd303f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:13Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.654486 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b546a5839399c4434cd7427e59be44414fd81cd31a3b8736bbc23d9c03ab5fd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:13Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.668813 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5e43a9-5dd9-470e-a3e1-65be2c0003c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 13:43:58.922221 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 13:43:58.922428 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 13:43:58.923819 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3279922669/tls.crt::/tmp/serving-cert-3279922669/tls.key\\\\\\\"\\\\nI0122 13:43:59.114141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 13:43:59.115832 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 13:43:59.115850 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 13:43:59.115871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 13:43:59.115876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 13:43:59.120589 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 13:43:59.120619 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120629 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 13:43:59.120651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 13:43:59.120656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 13:43:59.120662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 13:43:59.120676 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 13:43:59.123476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:13Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.688742 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.688799 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.688811 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.688826 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.688836 4769 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:13Z","lastTransitionTime":"2026-01-22T13:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.689223 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0af8746-c9f0-48e6-8a60-02fed286b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be9c299672c1ced5d45263cb73ea0d7766a3cd47bbb16996c91c24206203ffe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9528976f6e0625739097546d794445c24881673bfd0df525a77bbbd61e67897d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\
",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hwhw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:13Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.700721 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bqn6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16fc232a-07ad-4611-8612-7b1c3f784c14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ec909e6b2784f7c1e2fca82827a0c42109ef1c08e0c2ae484574c8f2c6460f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bqn6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:13Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.713215 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fclh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4186e93-df8a-49d3-9068-c8b8acd05baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4e835bfb6d47d3628a5f67cb226a00d51c3eebec57de5db55a54406bf1e6122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk8w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fclh4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:13Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.716975 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m892q\" (UniqueName: \"kubernetes.io/projected/29c69aef-2c74-4731-8334-85c8c755be74-kube-api-access-m892q\") pod \"ovnkube-control-plane-749d76644c-pwktf\" (UID: \"29c69aef-2c74-4731-8334-85c8c755be74\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pwktf" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.717037 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/29c69aef-2c74-4731-8334-85c8c755be74-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-pwktf\" (UID: \"29c69aef-2c74-4731-8334-85c8c755be74\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pwktf" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.717141 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/29c69aef-2c74-4731-8334-85c8c755be74-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-pwktf\" (UID: \"29c69aef-2c74-4731-8334-85c8c755be74\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pwktf" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.717202 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/29c69aef-2c74-4731-8334-85c8c755be74-env-overrides\") pod \"ovnkube-control-plane-749d76644c-pwktf\" (UID: \"29c69aef-2c74-4731-8334-85c8c755be74\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pwktf" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.718032 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/29c69aef-2c74-4731-8334-85c8c755be74-env-overrides\") pod \"ovnkube-control-plane-749d76644c-pwktf\" (UID: \"29c69aef-2c74-4731-8334-85c8c755be74\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pwktf" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.718279 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/29c69aef-2c74-4731-8334-85c8c755be74-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-pwktf\" (UID: \"29c69aef-2c74-4731-8334-85c8c755be74\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pwktf" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.726181 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/29c69aef-2c74-4731-8334-85c8c755be74-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-pwktf\" (UID: \"29c69aef-2c74-4731-8334-85c8c755be74\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pwktf" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.734055 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pwktf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29c69aef-2c74-4731-8334-85c8c755be74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m892q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m892q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pwktf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:13Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.739410 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m892q\" (UniqueName: 
\"kubernetes.io/projected/29c69aef-2c74-4731-8334-85c8c755be74-kube-api-access-m892q\") pod \"ovnkube-control-plane-749d76644c-pwktf\" (UID: \"29c69aef-2c74-4731-8334-85c8c755be74\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pwktf" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.749669 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:13Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.760739 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x582x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34fa095e-fc7f-431c-8421-1178e63721ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c630331d1e07a9bd283dfb95e9324ecc7666491cb2a4383ba82f45bbcaee3ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c8w6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x582x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:13Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.791018 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.791068 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.791078 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.791099 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.791113 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:13Z","lastTransitionTime":"2026-01-22T13:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.817256 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pwktf" Jan 22 13:44:13 crc kubenswrapper[4769]: W0122 13:44:13.831045 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29c69aef_2c74_4731_8334_85c8c755be74.slice/crio-5edaad4124ada3aad16932af9fe04bc4918550c2d4ef151ac14a81e8d08a0968 WatchSource:0}: Error finding container 5edaad4124ada3aad16932af9fe04bc4918550c2d4ef151ac14a81e8d08a0968: Status 404 returned error can't find the container with id 5edaad4124ada3aad16932af9fe04bc4918550c2d4ef151ac14a81e8d08a0968 Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.846045 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 19:45:32.05931606 +0000 UTC Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.883294 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:44:13 crc kubenswrapper[4769]: E0122 13:44:13.883438 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.883548 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:44:13 crc kubenswrapper[4769]: E0122 13:44:13.883682 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.884201 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:44:13 crc kubenswrapper[4769]: E0122 13:44:13.884566 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.897392 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.897438 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.897450 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.897468 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.897478 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:13Z","lastTransitionTime":"2026-01-22T13:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.000671 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.000721 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.000761 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.000782 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.000816 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:14Z","lastTransitionTime":"2026-01-22T13:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.103538 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.103585 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.103597 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.103615 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.103630 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:14Z","lastTransitionTime":"2026-01-22T13:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.196164 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pwktf" event={"ID":"29c69aef-2c74-4731-8334-85c8c755be74","Type":"ContainerStarted","Data":"5edaad4124ada3aad16932af9fe04bc4918550c2d4ef151ac14a81e8d08a0968"} Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.206061 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.206119 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.206132 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.206149 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.206160 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:14Z","lastTransitionTime":"2026-01-22T13:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.309599 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.309653 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.309670 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.309692 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.309708 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:14Z","lastTransitionTime":"2026-01-22T13:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.411999 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.412058 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.412075 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.412098 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.412115 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:14Z","lastTransitionTime":"2026-01-22T13:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.514431 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.514577 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.514594 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.514614 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.514628 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:14Z","lastTransitionTime":"2026-01-22T13:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.615692 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-cfh49"] Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.616493 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:44:14 crc kubenswrapper[4769]: E0122 13:44:14.616586 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.617615 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.617710 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.617730 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.617755 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.617772 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:14Z","lastTransitionTime":"2026-01-22T13:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.638433 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2dae2b-fadf-49f8-8ff4-4772ad128dca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45689b9787ddf9d1ee0769e761b7b24a3804e1a92e4bc522b5fc4ef4dc145ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83046a8dcab554309ccb822f7ea451bd90e9bbe0037f2b51061cb2943122ac47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad0f1c98d4f80ff4fb3c0a0b65518ea6fdf45f4e428661eb8c34419d0774da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0a1825528a237f05c91fcea30668c1b12f4b58a604912844e672424907cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:14Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.657197 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340e014090eff13efd7d84a279f298e5d933733828964a3926f5ce780010c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for 
pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:14Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.672358 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:14Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.694570 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6f2c33f3aa2c24eb46a672d0d23d034a7833eb85c8d4c7313b04fd659d7db54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d9wdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:14Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.715217 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c028db8-99b9-422d-ba46-e1a2db06ce3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a6f61ed512e5cacca4b895a2de4369e69b116f
0a55236b623ab8f3bb9a938a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21a6f61ed512e5cacca4b895a2de4369e69b116f0a55236b623ab8f3bb9a938a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T13:44:11Z\\\",\\\"message\\\":\\\" 6154 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:10Z is after 2025-08-24T17:21:41Z]\\\\nI0122 13:44:11.061703 6154 services_controller.go:434] Service openshift-operator-lifecycle-manager/packageserver-service retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{packageserver-service openshift-operator-lifecycle-manager a60a1f74-c6ff-4c81-96ae-27ba9796ba61 5485 0 2025-02-23 05:23:24 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[olm.managed:true] map[] [{operators.coreos.com/v1alpha1 ClusterServiceVersion packageserver bbc08db6-5ba4-4fc4-b49d-26331e1e728b 0xc007b5cb4d 0xc007b5cb4e}] [] []},Spec:ServiceSpec{Ports:[]ServicePo\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jrg8z_openshift-ovn-kubernetes(9c028db8-99b9-422d-ba46-e1a2db06ce3c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:14Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.719713 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.719782 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.719842 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.719876 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.719900 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:14Z","lastTransitionTime":"2026-01-22T13:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.727294 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9764ff0b-ae92-470b-af85-7c8bb41642ba-metrics-certs\") pod \"network-metrics-daemon-cfh49\" (UID: \"9764ff0b-ae92-470b-af85-7c8bb41642ba\") " pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.727395 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vshp2\" (UniqueName: \"kubernetes.io/projected/9764ff0b-ae92-470b-af85-7c8bb41642ba-kube-api-access-vshp2\") pod \"network-metrics-daemon-cfh49\" (UID: \"9764ff0b-ae92-470b-af85-7c8bb41642ba\") " pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.750647 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8acbc9bd-4838-4547-80a1-a1e16c37bd1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5acd2ae7d3fbf9440fb0095eefddc52c84728ec6d0e4d9821229cb6aa12573d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5c6216f99e720568a7dd7656244ade4b4fd44412e714b6135242a19f78761f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d261d3f5d679cb5cf8f94fe77bb969fd91ca614da19b149ce6eb02a380114d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://752fd19e1b4a5fd4026faf94d00dfd9322825c19436cccb582e074f4c203bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04ae80bb757e0b6ea9a3759ec49af25cff9106966608aa9750a0026e6053d15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\
"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:14Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.764426 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:14Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.781362 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b962200fa0a2d9c500125f56e96656fb0feb5d500a768d148cf9ca2a0569f970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f407182e277e55ccfc4bdc9fdd0832e0eab3cad2df3b32a66189e599dd303f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:14Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.797236 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b546a5839399c4434cd7427e59be44414fd81cd31a3b8736bbc23d9c03ab5fd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:14Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.816600 4769 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5e43a9-5dd9-470e-a3e1-65be2c0003c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 13:43:58.922221 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 13:43:58.922428 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 13:43:58.923819 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3279922669/tls.crt::/tmp/serving-cert-3279922669/tls.key\\\\\\\"\\\\nI0122 13:43:59.114141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 13:43:59.115832 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 13:43:59.115850 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 13:43:59.115871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 13:43:59.115876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 13:43:59.120589 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 13:43:59.120619 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120629 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 13:43:59.120651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 13:43:59.120656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 13:43:59.120662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 13:43:59.120676 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 13:43:59.123476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:14Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.826751 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.826913 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.827642 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.827669 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.827682 4769 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:14Z","lastTransitionTime":"2026-01-22T13:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.828222 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9764ff0b-ae92-470b-af85-7c8bb41642ba-metrics-certs\") pod \"network-metrics-daemon-cfh49\" (UID: \"9764ff0b-ae92-470b-af85-7c8bb41642ba\") " pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.828377 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vshp2\" (UniqueName: \"kubernetes.io/projected/9764ff0b-ae92-470b-af85-7c8bb41642ba-kube-api-access-vshp2\") pod \"network-metrics-daemon-cfh49\" (UID: \"9764ff0b-ae92-470b-af85-7c8bb41642ba\") " pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:44:14 crc kubenswrapper[4769]: E0122 13:44:14.828435 4769 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 13:44:14 crc kubenswrapper[4769]: E0122 13:44:14.828567 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9764ff0b-ae92-470b-af85-7c8bb41642ba-metrics-certs podName:9764ff0b-ae92-470b-af85-7c8bb41642ba nodeName:}" failed. No retries permitted until 2026-01-22 13:44:15.328533507 +0000 UTC m=+34.739643486 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9764ff0b-ae92-470b-af85-7c8bb41642ba-metrics-certs") pod "network-metrics-daemon-cfh49" (UID: "9764ff0b-ae92-470b-af85-7c8bb41642ba") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.835409 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0af8746-c9f0-48e6-8a60-02fed286b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be9c299672c1ced5d45263cb73ea0d7766a3cd47bbb16996c91c24206203ffe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9528976f6e0625739097546d794445c24881673bfd0df525a77bbbd61e67897d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hwhw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:14Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.847301 4769 certificate_manager.go:356] 
kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 08:53:16.465001951 +0000 UTC Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.851911 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bqn6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16fc232a-07ad-4611-8612-7b1c3f784c14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ec909e6b2784f7c1e2fca82827a0c42109ef1c08e0c2ae484574c8f2c6460f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bqn6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:14Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.856407 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vshp2\" (UniqueName: \"kubernetes.io/projected/9764ff0b-ae92-470b-af85-7c8bb41642ba-kube-api-access-vshp2\") pod \"network-metrics-daemon-cfh49\" (UID: \"9764ff0b-ae92-470b-af85-7c8bb41642ba\") " pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.876611 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fclh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4186e93-df8a-49d3-9068-c8b8acd05baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4e835bfb6d47d3628a5f67cb226a00d51c3eebec57de5db55a54406bf1e6122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk8w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fclh4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:14Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.896467 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pwktf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29c69aef-2c74-4731-8334-85c8c755be74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m892q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m892q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pwktf\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:14Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.916426 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cfh49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9764ff0b-ae92-470b-af85-7c8bb41642ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vshp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vshp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cfh49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:14Z is after 2025-08-24T17:21:41Z" 
Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.929892 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.930083 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.930150 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.930243 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.930309 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:14Z","lastTransitionTime":"2026-01-22T13:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.930916 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:14Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.941934 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.942766 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x582x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34fa095e-fc7f-431c-8421-1178e63721ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c630331d1e07a9bd283dfb95e9324ecc7666491cb2a4383ba82f45bbcaee3ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c8w6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x582x\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:14Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.952743 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0af8746-c9f0-48e6-8a60-02fed286b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be9c299672c1ced5d45263cb73ea0d7766a3cd47bbb16996c91c24206203ffe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9528976f6e0625739097546d794445c24881673bfd0df525a77bbbd61e67897d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-hwhw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:14Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.967253 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bqn6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16fc232a-07ad-4611-8612-7b1c3f784c14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ec909e6b2784f7c1e2fca82827a0c42109ef1c08e0c2ae484574c8f2c6460f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bqn6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:14Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.978324 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:14Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.988994 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x582x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34fa095e-fc7f-431c-8421-1178e63721ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c630331d1e07a9bd283dfb95e9324ecc7666491cb2a4383ba82f45bbcaee3ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c8w6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x582x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:14Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.004138 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fclh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4186e93-df8a-49d3-9068-c8b8acd05baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4e835bfb6d47d3628a5f67cb226a00d51c3eebec57de5db55a54406bf1e6122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk8w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fclh4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:15Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.015155 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pwktf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29c69aef-2c74-4731-8334-85c8c755be74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m892q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m892q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pwktf\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:15Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.027712 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cfh49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9764ff0b-ae92-470b-af85-7c8bb41642ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vshp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vshp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cfh49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:15Z is after 2025-08-24T17:21:41Z" 
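[editor's note — not part of the captured log] Every "Failed to update status for pod" entry above fails for the same reason, stated verbatim in each error: the network-node-identity webhook serving https://127.0.0.1:9743 presents an x509 certificate that expired 2025-08-24T17:21:41Z, while the node clock reads 2026-01-22. The sketch below, in Go, shows one way to confirm the validity window of the certificate the webhook actually presents; the endpoint is taken from the log, and running this on the affected node (so that 127.0.0.1 reaches the webhook) is an assumption, not something the log itself demonstrates.

// check_webhook_cert.go — minimal sketch; inspects the serving certificate,
// with chain verification deliberately skipped so an already-expired
// certificate can still be examined rather than rejected at handshake.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	// Endpoint copied from the log entries above (assumed reachable here).
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatalf("dial webhook: %v", err)
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Printf("subject:    %s\n", cert.Subject)
	fmt.Printf("not before: %s\n", cert.NotBefore.Format(time.RFC3339))
	fmt.Printf("not after:  %s\n", cert.NotAfter.Format(time.RFC3339))
	if time.Now().After(cert.NotAfter) {
		fmt.Println("expired — consistent with the x509 errors in this log")
	}
}

The knock-on failures recorded below trace back to the same certificate: ovnkube-controller crash-loops because it cannot set node annotations through the expired node.network-node-identity.openshift.io webhook, and the node reports NotReady with "no CNI configuration file in /etc/kubernetes/cni/net.d/", the expected consequence of the network plugin never coming up.
[end note]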
Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.032251 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.032272 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.032282 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.032295 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.032304 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:15Z","lastTransitionTime":"2026-01-22T13:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.048261 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8acbc9bd-4838-4547-80a1-a1e16c37bd1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5acd2ae7d3fbf9440fb0095eefddc52c84728ec6d0e4d9821229cb6aa12573d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5c6216f99e720568a7dd7656244ade4b4fd44412e714b6135242a19f78761f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9b
e8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d261d3f5d679cb5cf8f94fe77bb969fd91ca614da19b149ce6eb02a380114d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://752fd19e1b4a5fd4026faf94d00dfd9322825c19436cccb582e074f4c203bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04ae80bb757e0b6ea9a3759ec49af25cff9106966608aa9750a0026e6053d15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:15Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.063291 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2dae2b-fadf-49f8-8ff4-4772ad128dca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45689b9787ddf9d1ee0769e761b7b24a3804e1a92e4bc522b5fc4ef4dc145ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83046a8dcab554309ccb822f7ea451bd90e9bbe0037f2b51061cb2943122ac47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad0f1c98d4f80ff4fb3c0a0b65518ea6fdf45f4e428661eb8c34419d0774da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0a1825528a237f05c91fcea30668c1b12f4b58a604912844e672424907cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:15Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.082154 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340e014090eff13efd7d84a279f298e5d933733828964a3926f5ce780010c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:15Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.096372 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:15Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.114860 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6f2c33f3aa2c24eb46a672d0d23d034a7833eb85c8d4c7313b04fd659d7db54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d9wdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:15Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.135127 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.135197 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:15 crc 
kubenswrapper[4769]: I0122 13:44:15.135208 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.135225 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.135236 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:15Z","lastTransitionTime":"2026-01-22T13:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.140769 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c028db8-99b9-422d-ba46-e1a2db06ce3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a6f61ed512e5cacca4b895a2de4369e69b116f
0a55236b623ab8f3bb9a938a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21a6f61ed512e5cacca4b895a2de4369e69b116f0a55236b623ab8f3bb9a938a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T13:44:11Z\\\",\\\"message\\\":\\\" 6154 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:10Z is after 2025-08-24T17:21:41Z]\\\\nI0122 13:44:11.061703 6154 services_controller.go:434] Service openshift-operator-lifecycle-manager/packageserver-service retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{packageserver-service openshift-operator-lifecycle-manager a60a1f74-c6ff-4c81-96ae-27ba9796ba61 5485 0 2025-02-23 05:23:24 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[olm.managed:true] map[] [{operators.coreos.com/v1alpha1 ClusterServiceVersion packageserver bbc08db6-5ba4-4fc4-b49d-26331e1e728b 0xc007b5cb4d 0xc007b5cb4e}] [] []},Spec:ServiceSpec{Ports:[]ServicePo\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jrg8z_openshift-ovn-kubernetes(9c028db8-99b9-422d-ba46-e1a2db06ce3c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:15Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.157548 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5e43a9-5dd9-470e-a3e1-65be2c0003c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 13:43:58.922221 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 13:43:58.922428 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 13:43:58.923819 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3279922669/tls.crt::/tmp/serving-cert-3279922669/tls.key\\\\\\\"\\\\nI0122 13:43:59.114141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 13:43:59.115832 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 13:43:59.115850 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 13:43:59.115871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 13:43:59.115876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 13:43:59.120589 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 13:43:59.120619 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120629 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 13:43:59.120651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 13:43:59.120656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 13:43:59.120662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 13:43:59.120676 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 13:43:59.123476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:15Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.177324 4769 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:15Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.195646 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b962200fa0a2d9c500125f56e96656fb0feb5d500a768d148cf9ca2a0569f970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f407182e277e55ccfc4bdc9fdd0832e0eab3cad2df3b32a66189e599dd303f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:15Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.201963 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pwktf" event={"ID":"29c69aef-2c74-4731-8334-85c8c755be74","Type":"ContainerStarted","Data":"10390dacc9fe0452c4b8e2f3b43ffa16abdb260918a2cea271e546875c22cd84"} Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.202030 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pwktf" event={"ID":"29c69aef-2c74-4731-8334-85c8c755be74","Type":"ContainerStarted","Data":"05de7d7a90042aebcc3f9c3ecd82febecef6e209d3c12dfe22a55b0a2960afdd"} Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.212136 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b546a5839399c4434cd7427e59be44414fd81cd31a3b8736bbc23d9c03ab5fd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:15Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.228725 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6f2c33f3aa2c24eb46a672d0d23d034a7833eb85c8d4c7313b04fd659d7db54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d9wdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:15Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.237154 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.237417 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:15 crc 
kubenswrapper[4769]: I0122 13:44:15.237567 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.237728 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.237891 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:15Z","lastTransitionTime":"2026-01-22T13:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.253048 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c028db8-99b9-422d-ba46-e1a2db06ce3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a6f61ed512e5cacca4b895a2de4369e69b116f
0a55236b623ab8f3bb9a938a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21a6f61ed512e5cacca4b895a2de4369e69b116f0a55236b623ab8f3bb9a938a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T13:44:11Z\\\",\\\"message\\\":\\\" 6154 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:10Z is after 2025-08-24T17:21:41Z]\\\\nI0122 13:44:11.061703 6154 services_controller.go:434] Service openshift-operator-lifecycle-manager/packageserver-service retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{packageserver-service openshift-operator-lifecycle-manager a60a1f74-c6ff-4c81-96ae-27ba9796ba61 5485 0 2025-02-23 05:23:24 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[olm.managed:true] map[] [{operators.coreos.com/v1alpha1 ClusterServiceVersion packageserver bbc08db6-5ba4-4fc4-b49d-26331e1e728b 0xc007b5cb4d 0xc007b5cb4e}] [] []},Spec:ServiceSpec{Ports:[]ServicePo\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jrg8z_openshift-ovn-kubernetes(9c028db8-99b9-422d-ba46-e1a2db06ce3c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:15Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.272189 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8acbc9bd-4838-4547-80a1-a1e16c37bd1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5acd2ae7d3fbf9440fb0095eefddc52c84728ec6d0e4d9821229cb6aa12573d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5c6216f99e720568a7dd76
56244ade4b4fd44412e714b6135242a19f78761f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d261d3f5d679cb5cf8f94fe77bb969fd91ca614da19b149ce6eb02a380114d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://752fd19e1b4a5fd4026faf94d00dfd9322825c19436cccb582e074f4c203bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04ae80bb757e0b6ea9a3759ec49af25cff9106966608aa9750a0026e6053d15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:15Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.285742 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2dae2b-fadf-49f8-8ff4-4772ad128dca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45689b9787ddf9d1ee0769e761b7b24a3804e1a92e4bc522b5fc4ef4dc145ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83046a8dcab554309ccb822f7ea451bd90e9bbe0037f2b51061cb2943122ac47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad0f1c98d4f80ff4fb3c0a0b65518ea6fdf45f4e428661eb8c34419d0774da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0a1825528a237f05c91fcea30668c1b12f4b58a604912844e672424907cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:15Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.299614 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340e014090eff13efd7d84a279f298e5d933733828964a3926f5ce780010c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:15Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.316885 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:15Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.333012 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9764ff0b-ae92-470b-af85-7c8bb41642ba-metrics-certs\") pod \"network-metrics-daemon-cfh49\" (UID: \"9764ff0b-ae92-470b-af85-7c8bb41642ba\") " pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:44:15 crc kubenswrapper[4769]: E0122 13:44:15.333135 4769 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 13:44:15 crc kubenswrapper[4769]: E0122 13:44:15.333182 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9764ff0b-ae92-470b-af85-7c8bb41642ba-metrics-certs podName:9764ff0b-ae92-470b-af85-7c8bb41642ba nodeName:}" failed. 
No retries permitted until 2026-01-22 13:44:16.333168195 +0000 UTC m=+35.744278134 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9764ff0b-ae92-470b-af85-7c8bb41642ba-metrics-certs") pod "network-metrics-daemon-cfh49" (UID: "9764ff0b-ae92-470b-af85-7c8bb41642ba") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.336370 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5e43a9-5dd9-470e-a3e1-65be2c0003c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 13:43:58.922221 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 13:43:58.922428 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 13:43:58.923819 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3279922669/tls.crt::/tmp/serving-cert-3279922669/tls.key\\\\\\\"\\\\nI0122 13:43:59.114141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 13:43:59.115832 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 13:43:59.115850 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 13:43:59.115871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 13:43:59.115876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 13:43:59.120589 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 13:43:59.120619 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120629 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 13:43:59.120651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 13:43:59.120656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 13:43:59.120662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 13:43:59.120676 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 13:43:59.123476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:15Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.340318 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.340371 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.340389 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.340415 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.340433 4769 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:15Z","lastTransitionTime":"2026-01-22T13:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.352190 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:15Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.368037 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b962200fa0a2d9c500125f56e96656fb0feb5d500a768d148cf9ca2a0569f970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f407182e277e55ccfc4bdc9fdd0832e0eab3cad2df3b32a66189e599dd303f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:15Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.418582 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b546a5839399c4434cd7427e59be44414fd81cd31a3b8736bbc23d9c03ab5fd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:15Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.431934 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0af8746-c9f0-48e6-8a60-02fed286b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be9c299672c1ced5d45263cb73ea0d7766a3cd47bbb16996c91c24206203ffe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9528976f6e0625739097546d794445c24881673bfd0df525a77bbbd61e67897d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hwhw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:15Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.443159 4769 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.443211 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.443228 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.443252 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.443270 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:15Z","lastTransitionTime":"2026-01-22T13:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.447682 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bqn6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16fc232a-07ad-4611-8612-7b1c3f784c14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ec909e6b2784f7c1e2fca82827a0c42109ef1c08e0c2ae484574c8f2c6460f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bqn6j\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:15Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.466637 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:15Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.480928 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x582x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34fa095e-fc7f-431c-8421-1178e63721ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c630331d1e07a9bd283dfb95e9324ecc7666491cb2a4383ba82f45bbcaee3ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c8w6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x582x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:15Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.502215 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fclh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4186e93-df8a-49d3-9068-c8b8acd05baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4e835bfb6d47d3628a5f67cb226a00d51c3eebec57de5db55a54406bf1e6122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk8w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fclh4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:15Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.518014 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pwktf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29c69aef-2c74-4731-8334-85c8c755be74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05de7d7a90042aebcc3f9c3ecd82febecef6e209d3c12dfe22a55b0a2960afdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m892q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10390dacc9fe0452c4b8e2f3b43ffa16abdb260918a2cea271e546875c22cd84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m892q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pwktf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:15Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.532404 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cfh49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9764ff0b-ae92-470b-af85-7c8bb41642ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vshp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vshp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cfh49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:15Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.546103 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.546315 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.546407 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.546503 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.546598 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:15Z","lastTransitionTime":"2026-01-22T13:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.649431 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.649498 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.649535 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.649567 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.649595 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:15Z","lastTransitionTime":"2026-01-22T13:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.738420 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:44:15 crc kubenswrapper[4769]: E0122 13:44:15.738597 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:44:31.738567157 +0000 UTC m=+51.149677126 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.738684 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.739079 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:44:15 crc kubenswrapper[4769]: E0122 13:44:15.739131 4769 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 13:44:15 crc kubenswrapper[4769]: E0122 13:44:15.739246 4769 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 13:44:15 crc kubenswrapper[4769]: E0122 13:44:15.739534 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 13:44:31.739429539 +0000 UTC m=+51.150539508 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 13:44:15 crc kubenswrapper[4769]: E0122 13:44:15.739679 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 13:44:31.739549182 +0000 UTC m=+51.150659151 (durationBeforeRetry 16s). 
Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.752811 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.752840 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.752849 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.752861 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.752869 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:15Z","lastTransitionTime":"2026-01-22T13:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.840200 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.840281 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 22 13:44:15 crc kubenswrapper[4769]: E0122 13:44:15.840498 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 22 13:44:15 crc kubenswrapper[4769]: E0122 13:44:15.840520 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 22 13:44:15 crc kubenswrapper[4769]: E0122 13:44:15.840534 4769 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 22 13:44:15 crc kubenswrapper[4769]: E0122 13:44:15.840598 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-22 13:44:31.840582448 +0000 UTC m=+51.251692377 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 22 13:44:15 crc kubenswrapper[4769]: E0122 13:44:15.841042 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 22 13:44:15 crc kubenswrapper[4769]: E0122 13:44:15.841114 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 22 13:44:15 crc kubenswrapper[4769]: E0122 13:44:15.841138 4769 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 22 13:44:15 crc kubenswrapper[4769]: E0122 13:44:15.841251 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 13:44:31.841219104 +0000 UTC m=+51.252329063 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.847778 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 09:35:06.690870987 +0000 UTC
Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.856154 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.856216 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.856239 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.856268 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.856289 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:15Z","lastTransitionTime":"2026-01-22T13:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
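Every NodeNotReady heartbeat in this log has the same root cause: nothing has written a CNI network config into /etc/kubernetes/cni/net.d/ yet, so the runtime reports NetworkReady=false. On an OpenShift node that file is managed by the cluster network operator (via Multus), not written by hand; the sketch below only illustrates the conflist shape the kubelet waits for, using the upstream bridge/host-local plugins, which is not what this cluster actually installs:

    {
      "cniVersion": "0.4.0",
      "name": "example-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "ipam": { "type": "host-local", "subnet": "10.88.0.0/16" }
        }
      ]
    }

On a cluster like this one, the practical checks are whether the network operator and its pods are progressing, e.g. oc get co network and oc -n openshift-multus get pods, and whether ls /etc/kubernetes/cni/net.d/ on the node eventually shows a config file.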
Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.882952 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.883066 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 22 13:44:15 crc kubenswrapper[4769]: E0122 13:44:15.883148 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.883202 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 22 13:44:15 crc kubenswrapper[4769]: E0122 13:44:15.883358 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 22 13:44:15 crc kubenswrapper[4769]: E0122 13:44:15.883558 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.959186 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.959230 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.959243 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.959265 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.959280 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:15Z","lastTransitionTime":"2026-01-22T13:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.062124 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.062194 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.062212 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.062239 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.062256 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:16Z","lastTransitionTime":"2026-01-22T13:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.165098 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.165174 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.165202 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.165233 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.165250 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:16Z","lastTransitionTime":"2026-01-22T13:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.268444 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.268498 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.268520 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.268548 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.268570 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:16Z","lastTransitionTime":"2026-01-22T13:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.346873 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9764ff0b-ae92-470b-af85-7c8bb41642ba-metrics-certs\") pod \"network-metrics-daemon-cfh49\" (UID: \"9764ff0b-ae92-470b-af85-7c8bb41642ba\") " pod="openshift-multus/network-metrics-daemon-cfh49"
Jan 22 13:44:16 crc kubenswrapper[4769]: E0122 13:44:16.347067 4769 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 22 13:44:16 crc kubenswrapper[4769]: E0122 13:44:16.347482 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9764ff0b-ae92-470b-af85-7c8bb41642ba-metrics-certs podName:9764ff0b-ae92-470b-af85-7c8bb41642ba nodeName:}" failed. No retries permitted until 2026-01-22 13:44:18.347458065 +0000 UTC m=+37.758568004 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9764ff0b-ae92-470b-af85-7c8bb41642ba-metrics-certs") pod "network-metrics-daemon-cfh49" (UID: "9764ff0b-ae92-470b-af85-7c8bb41642ba") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.370714 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.370843 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.370860 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.370875 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.370898 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:16Z","lastTransitionTime":"2026-01-22T13:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.473006 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.473257 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.473548 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.473859 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.474147 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:16Z","lastTransitionTime":"2026-01-22T13:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.577933 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.577994 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.578011 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.578050 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.578075 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:16Z","lastTransitionTime":"2026-01-22T13:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.681067 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.681150 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.681174 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.681204 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.681228 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:16Z","lastTransitionTime":"2026-01-22T13:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.784234 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.784270 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.784285 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.784306 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.784316 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:16Z","lastTransitionTime":"2026-01-22T13:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.848335 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 01:13:44.060995542 +0000 UTC
Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.883438 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49"
Jan 22 13:44:16 crc kubenswrapper[4769]: E0122 13:44:16.883674 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba"
Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.887009 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.887070 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.887092 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.887118 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.887144 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:16Z","lastTransitionTime":"2026-01-22T13:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
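Two different certificates are in play here. The certificate_manager.go lines concern the kubelet serving certificate, which is still valid (expires 2026-02-24); the rotation deadline is re-jittered on each evaluation, which is why it jumps between dates, and a deadline already in the past should simply mean rotation is attempted immediately. The expired certificate in the earlier TLS failure against 127.0.0.1:9743 belongs to a separate local endpoint. A way to inspect the serving cert on the node, assuming the stock kubelet PKI location (not stated in this log):

    openssl x509 -noout -dates -in /var/lib/kubelet/pki/kubelet-server-current.pem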
Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.989885 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.990247 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.990444 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.990609 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.990754 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:16Z","lastTransitionTime":"2026-01-22T13:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.094130 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.094216 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.094242 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.094273 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.094294 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:17Z","lastTransitionTime":"2026-01-22T13:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.197540 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.197608 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.197631 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.197663 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.197688 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:17Z","lastTransitionTime":"2026-01-22T13:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.300075 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.300116 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.300128 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.300144 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.300154 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:17Z","lastTransitionTime":"2026-01-22T13:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.403333 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.403641 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.403743 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.403878 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.403981 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:17Z","lastTransitionTime":"2026-01-22T13:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.506570 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.506686 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.506710 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.506740 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.506762 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:17Z","lastTransitionTime":"2026-01-22T13:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.610116 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.610172 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.610190 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.610215 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.610236 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:17Z","lastTransitionTime":"2026-01-22T13:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.713182 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.713268 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.713287 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.713310 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.713328 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:17Z","lastTransitionTime":"2026-01-22T13:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.815709 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.815763 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.815782 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.815845 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.815863 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:17Z","lastTransitionTime":"2026-01-22T13:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.849496 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 21:46:09.288202081 +0000 UTC
Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.883136 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.883198 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 22 13:44:17 crc kubenswrapper[4769]: E0122 13:44:17.883296 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.883313 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 22 13:44:17 crc kubenswrapper[4769]: E0122 13:44:17.883402 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 22 13:44:17 crc kubenswrapper[4769]: E0122 13:44:17.883468 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.919702 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.919894 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.919923 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.919955 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.919976 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:17Z","lastTransitionTime":"2026-01-22T13:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.023093 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.023169 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.023193 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.023221 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.023238 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:18Z","lastTransitionTime":"2026-01-22T13:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.127405 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.127470 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.127490 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.127520 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.127544 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:18Z","lastTransitionTime":"2026-01-22T13:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.231404 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.231469 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.231538 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.231575 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.231599 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:18Z","lastTransitionTime":"2026-01-22T13:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.335669 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.335787 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.335849 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.335877 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.335895 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:18Z","lastTransitionTime":"2026-01-22T13:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.371286 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9764ff0b-ae92-470b-af85-7c8bb41642ba-metrics-certs\") pod \"network-metrics-daemon-cfh49\" (UID: \"9764ff0b-ae92-470b-af85-7c8bb41642ba\") " pod="openshift-multus/network-metrics-daemon-cfh49"
Jan 22 13:44:18 crc kubenswrapper[4769]: E0122 13:44:18.371442 4769 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 22 13:44:18 crc kubenswrapper[4769]: E0122 13:44:18.371730 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9764ff0b-ae92-470b-af85-7c8bb41642ba-metrics-certs podName:9764ff0b-ae92-470b-af85-7c8bb41642ba nodeName:}" failed. No retries permitted until 2026-01-22 13:44:22.371711139 +0000 UTC m=+41.782821078 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9764ff0b-ae92-470b-af85-7c8bb41642ba-metrics-certs") pod "network-metrics-daemon-cfh49" (UID: "9764ff0b-ae92-470b-af85-7c8bb41642ba") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.438420 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.438463 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.438477 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.438494 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.438505 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:18Z","lastTransitionTime":"2026-01-22T13:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.542153 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.542876 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.543058 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.543260 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.543414 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:18Z","lastTransitionTime":"2026-01-22T13:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
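Note the backoff progression for this same metrics-certs mount: 2s at 13:44:16, 4s at 13:44:18, while the older failures above are already at 16s; the volume manager appears to double the retry delay after each failure up to a cap, so these entries are the same condition being retried, not new faults. Assuming cluster-admin access, the underlying secret can be checked directly:

    oc -n openshift-multus get secret metrics-daemon-secret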
Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.646331 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.646382 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.646393 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.646412 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.646425 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:18Z","lastTransitionTime":"2026-01-22T13:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.749424 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.749483 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.749500 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.749526 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.749544 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:18Z","lastTransitionTime":"2026-01-22T13:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.850029 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 16:27:42.450948691 +0000 UTC
Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.851459 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.851534 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.851553 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.851586 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.851603 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:18Z","lastTransitionTime":"2026-01-22T13:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.883289 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49"
Jan 22 13:44:18 crc kubenswrapper[4769]: E0122 13:44:18.883586 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba"
Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.954001 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.954062 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.954074 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.954090 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.954103 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:18Z","lastTransitionTime":"2026-01-22T13:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.057870 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.057936 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.057956 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.057981 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.058001 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:19Z","lastTransitionTime":"2026-01-22T13:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.160998 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.161033 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.161043 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.161059 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.161070 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:19Z","lastTransitionTime":"2026-01-22T13:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.264651 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.264717 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.264743 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.264784 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.264844 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:19Z","lastTransitionTime":"2026-01-22T13:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.367915 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.367960 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.367971 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.367988 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.368000 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:19Z","lastTransitionTime":"2026-01-22T13:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.471195 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.471260 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.471285 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.471317 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.471341 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:19Z","lastTransitionTime":"2026-01-22T13:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.573701 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.573743 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.573771 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.573802 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.573813 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:19Z","lastTransitionTime":"2026-01-22T13:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.676077 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.676134 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.676146 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.676165 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.676178 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:19Z","lastTransitionTime":"2026-01-22T13:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.779038 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.779086 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.779098 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.779116 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.779128 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:19Z","lastTransitionTime":"2026-01-22T13:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.850150 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 16:50:00.93928525 +0000 UTC
Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.882074 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.882119 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.882136 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.882158 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.882176 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:19Z","lastTransitionTime":"2026-01-22T13:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.882195 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.882256 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 22 13:44:19 crc kubenswrapper[4769]: E0122 13:44:19.882291 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 22 13:44:19 crc kubenswrapper[4769]: E0122 13:44:19.882422 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.882530 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 22 13:44:19 crc kubenswrapper[4769]: E0122 13:44:19.882636 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.984643 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.984698 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.984712 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.984731 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.984747 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:19Z","lastTransitionTime":"2026-01-22T13:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.087107 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.087147 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.087158 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.087178 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.087190 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:20Z","lastTransitionTime":"2026-01-22T13:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.189922 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.190002 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.190024 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.190055 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.190076 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:20Z","lastTransitionTime":"2026-01-22T13:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.292474 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.292556 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.292576 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.292601 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.292618 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:20Z","lastTransitionTime":"2026-01-22T13:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.396250 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.396337 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.396365 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.396400 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.396421 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:20Z","lastTransitionTime":"2026-01-22T13:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.499580 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.499637 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.499647 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.499663 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.499674 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:20Z","lastTransitionTime":"2026-01-22T13:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.602344 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.602411 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.602429 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.602455 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.602473 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:20Z","lastTransitionTime":"2026-01-22T13:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.705783 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.705847 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.705857 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.705873 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.705884 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:20Z","lastTransitionTime":"2026-01-22T13:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.807974 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.808020 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.808030 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.808045 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.808055 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:20Z","lastTransitionTime":"2026-01-22T13:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.850688 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 10:17:34.187882344 +0000 UTC Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.883193 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:44:20 crc kubenswrapper[4769]: E0122 13:44:20.883416 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.911960 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.912021 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.912040 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.912066 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.912086 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:20Z","lastTransitionTime":"2026-01-22T13:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.915577 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8acbc9bd-4838-4547-80a1-a1e16c37bd1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5acd2ae7d3fbf9440fb0095eefddc52c84728ec6d0e4d9821229cb6aa12573d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5c6216f99e720568a7dd7656244ade4b4fd44412e714b6135242a19f78761f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d261d3f5d679cb5cf8f94fe77bb969fd91ca614da19b149ce6eb02a380114d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://752fd19e1b4a5fd4026faf94d00dfd9322825c19436cccb582e074f4c203bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04ae80bb757e0b6ea9a3759ec49af25cff9106966608aa9750a0026e6053d15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.934440 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2dae2b-fadf-49f8-8ff4-4772ad128dca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45689b9787ddf9d1ee0769e761b7b24a3804e1a92e4bc522b5fc4ef4dc145ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83046a8dcab554309ccb822f7ea451bd90e9bbe0037f2b51061cb2943122ac47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad0f1c98d4f80ff4fb3c0a0b65518ea6fdf45f4e428661eb8c34419d0774da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0a1825528a237f05c91fcea30668c1b12f4b58a604912844e672424907cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.951049 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340e014090eff13efd7d84a279f298e5d933733828964a3926f5ce780010c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.966680 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.984132 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6f2c33f3aa2c24eb46a672d0d23d034a7833eb85c8d4c7313b04fd659d7db54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d9wdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.014300 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.014363 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:21 crc 
kubenswrapper[4769]: I0122 13:44:21.014381 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.014405 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.014426 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:21Z","lastTransitionTime":"2026-01-22T13:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.014602 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c028db8-99b9-422d-ba46-e1a2db06ce3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a6f61ed512e5cacca4b895a2de4369e69b116f
0a55236b623ab8f3bb9a938a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21a6f61ed512e5cacca4b895a2de4369e69b116f0a55236b623ab8f3bb9a938a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T13:44:11Z\\\",\\\"message\\\":\\\" 6154 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:10Z is after 2025-08-24T17:21:41Z]\\\\nI0122 13:44:11.061703 6154 services_controller.go:434] Service openshift-operator-lifecycle-manager/packageserver-service retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{packageserver-service openshift-operator-lifecycle-manager a60a1f74-c6ff-4c81-96ae-27ba9796ba61 5485 0 2025-02-23 05:23:24 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[olm.managed:true] map[] [{operators.coreos.com/v1alpha1 ClusterServiceVersion packageserver bbc08db6-5ba4-4fc4-b49d-26331e1e728b 0xc007b5cb4d 0xc007b5cb4e}] [] []},Spec:ServiceSpec{Ports:[]ServicePo\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jrg8z_openshift-ovn-kubernetes(9c028db8-99b9-422d-ba46-e1a2db06ce3c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:21Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.036145 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5e43a9-5dd9-470e-a3e1-65be2c0003c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 13:43:58.922221 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 13:43:58.922428 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 13:43:58.923819 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3279922669/tls.crt::/tmp/serving-cert-3279922669/tls.key\\\\\\\"\\\\nI0122 13:43:59.114141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 13:43:59.115832 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 13:43:59.115850 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 13:43:59.115871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 13:43:59.115876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 13:43:59.120589 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 13:43:59.120619 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120629 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 13:43:59.120651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 13:43:59.120656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 13:43:59.120662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 13:43:59.120676 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 13:43:59.123476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:21Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.050592 4769 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:21Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.069109 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b962200fa0a2d9c500125f56e96656fb0feb5d500a768d148cf9ca2a0569f970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f407182e277e55ccfc4bdc9fdd0832e0eab3cad2df3b32a66189e599dd303f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:21Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.089533 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b546a5839399c4434cd7427e59be44414fd81cd31a3b8736bbc23d9c03ab5fd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:21Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.106235 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0af8746-c9f0-48e6-8a60-02fed286b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be9c299672c1ced5d45263cb73ea0d7766a3cd47bbb16996c91c24206203ffe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9528976f6e0625739097546d794445c24881673bfd0df525a77bbbd61e67897d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hwhw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:21Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.118480 4769 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.118571 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.118594 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.118620 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.118673 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:21Z","lastTransitionTime":"2026-01-22T13:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.128309 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bqn6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16fc232a-07ad-4611-8612-7b1c3f784c14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ec909e6b2784f7c1e2fca82827a0c42109ef1c08e0c2ae484574c8f2c6460f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bqn6j\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:21Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.147427 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:21Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.160683 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x582x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34fa095e-fc7f-431c-8421-1178e63721ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c630331d1e07a9bd283dfb95e9324ecc7666491cb2a4383ba82f45bbcaee3ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c8w6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x582x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:21Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.180257 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fclh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4186e93-df8a-49d3-9068-c8b8acd05baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4e835bfb6d47d3628a5f67cb226a00d51c3eebec57de5db55a54406bf1e6122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk8w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fclh4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:21Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.195007 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pwktf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29c69aef-2c74-4731-8334-85c8c755be74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05de7d7a90042aebcc3f9c3ecd82febecef6e209d3c12dfe22a55b0a2960afdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m892q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10390dacc9fe0452c4b8e2f3b43ffa16abdb260918a2cea271e546875c22cd84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m892q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pwktf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:21Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.208133 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cfh49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9764ff0b-ae92-470b-af85-7c8bb41642ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vshp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vshp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cfh49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:21Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.221373 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.221590 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.221710 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.221849 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.221983 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:21Z","lastTransitionTime":"2026-01-22T13:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.325096 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.325160 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.325183 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.325232 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.325257 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:21Z","lastTransitionTime":"2026-01-22T13:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.427960 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.428306 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.428447 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.428586 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.428740 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:21Z","lastTransitionTime":"2026-01-22T13:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.531981 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.532048 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.532071 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.532100 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.532124 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:21Z","lastTransitionTime":"2026-01-22T13:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.635079 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.635121 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.635130 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.635145 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.635159 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:21Z","lastTransitionTime":"2026-01-22T13:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.738431 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.738485 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.738499 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.738517 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.738529 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:21Z","lastTransitionTime":"2026-01-22T13:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.841754 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.841849 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.841875 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.841904 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.841926 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:21Z","lastTransitionTime":"2026-01-22T13:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.851189 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 19:30:43.072248712 +0000 UTC Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.882877 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.882933 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:44:21 crc kubenswrapper[4769]: E0122 13:44:21.883035 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.882896 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:44:21 crc kubenswrapper[4769]: E0122 13:44:21.883259 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:44:21 crc kubenswrapper[4769]: E0122 13:44:21.883335 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.945224 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.945288 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.945310 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.945368 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.945392 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:21Z","lastTransitionTime":"2026-01-22T13:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.048162 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.048220 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.048238 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.048261 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.048279 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:22Z","lastTransitionTime":"2026-01-22T13:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.151897 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.151985 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.152017 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.152047 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.152067 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:22Z","lastTransitionTime":"2026-01-22T13:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.255115 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.255178 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.255209 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.255237 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.255257 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:22Z","lastTransitionTime":"2026-01-22T13:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.357880 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.357936 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.357953 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.357977 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.357996 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:22Z","lastTransitionTime":"2026-01-22T13:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.416973 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9764ff0b-ae92-470b-af85-7c8bb41642ba-metrics-certs\") pod \"network-metrics-daemon-cfh49\" (UID: \"9764ff0b-ae92-470b-af85-7c8bb41642ba\") " pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:44:22 crc kubenswrapper[4769]: E0122 13:44:22.417299 4769 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 13:44:22 crc kubenswrapper[4769]: E0122 13:44:22.417446 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9764ff0b-ae92-470b-af85-7c8bb41642ba-metrics-certs podName:9764ff0b-ae92-470b-af85-7c8bb41642ba nodeName:}" failed. No retries permitted until 2026-01-22 13:44:30.417417125 +0000 UTC m=+49.828527094 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9764ff0b-ae92-470b-af85-7c8bb41642ba-metrics-certs") pod "network-metrics-daemon-cfh49" (UID: "9764ff0b-ae92-470b-af85-7c8bb41642ba") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.460161 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.460209 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.460220 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.460241 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.460253 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:22Z","lastTransitionTime":"2026-01-22T13:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.562547 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.562601 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.562617 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.562639 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.562655 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:22Z","lastTransitionTime":"2026-01-22T13:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.653910 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.653957 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.653968 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.653985 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.653995 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:22Z","lastTransitionTime":"2026-01-22T13:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:22 crc kubenswrapper[4769]: E0122 13:44:22.667566 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c179e315-653f-44a2-90da-146c8bca7b57\\\",\\\"systemUUID\\\":\\\"a3bb8776-1087-4679-a96f-5f1347bd430e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:22Z is after 2025-08-24T17:21:41Z"
Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.673472 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.673538 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.673557 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.673585 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.673608 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:22Z","lastTransitionTime":"2026-01-22T13:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:22 crc kubenswrapper[4769]: E0122 13:44:22.691318 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c179e315-653f-44a2-90da-146c8bca7b57\\\",\\\"systemUUID\\\":\\\"a3bb8776-1087-4679-a96f-5f1347bd430e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:22Z is after 2025-08-24T17:21:41Z"
Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.696116 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.696164 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.696176 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.696194 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.696206 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:22Z","lastTransitionTime":"2026-01-22T13:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:22 crc kubenswrapper[4769]: E0122 13:44:22.723227 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c179e315-653f-44a2-90da-146c8bca7b57\\\",\\\"systemUUID\\\":\\\"a3bb8776-1087-4679-a96f-5f1347bd430e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:22Z is after 2025-08-24T17:21:41Z"
Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.729542 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.729577 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.729592 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.729610 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.729623 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:22Z","lastTransitionTime":"2026-01-22T13:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:22 crc kubenswrapper[4769]: E0122 13:44:22.748210 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c179e315-653f-44a2-90da-146c8bca7b57\\\",\\\"systemUUID\\\":\\\"a3bb8776-1087-4679-a96f-5f1347bd430e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:22Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.754072 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.754111 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.754123 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.754142 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.754156 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:22Z","lastTransitionTime":"2026-01-22T13:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:22 crc kubenswrapper[4769]: E0122 13:44:22.769408 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c179e315-653f-44a2-90da-146c8bca7b57\\\",\\\"systemUUID\\\":\\\"a3bb8776-1087-4679-a96f-5f1347bd430e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:22Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:22 crc kubenswrapper[4769]: E0122 13:44:22.769538 4769 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.771123 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory"
Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.771163 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.771173 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.771191 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.771203 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:22Z","lastTransitionTime":"2026-01-22T13:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.852236 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 07:36:49.922168606 +0000 UTC
Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.873858 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.873902 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.873911 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.873924 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.873938 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:22Z","lastTransitionTime":"2026-01-22T13:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.883250 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49"
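The certificate_manager.go:356 entries in this stretch pair a fixed kubelet-serving expiration (2026-02-24 05:53:03 UTC) with a rotation deadline that differs on every pass and already lies in the past. That pattern is consistent with a jittered rotation schedule recomputed on each loop; below is a minimal Go sketch of such a scheme, assuming the common 70-90%-of-lifetime window (both the jitter fraction and the one-year lifetime are assumptions, not values taken from this log).

    // jitterdeadline.go: minimal sketch of jittered certificate-rotation
    // scheduling; the 70-90% window and one-year lifetime are assumptions.
    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // nextRotationDeadline picks a random point at 70-90% of the certificate
    // lifetime so a fleet of kubelets does not all rotate at the same instant.
    func nextRotationDeadline(notBefore, notAfter time.Time) time.Time {
        lifetime := notAfter.Sub(notBefore)
        jittered := time.Duration(float64(lifetime) * (0.7 + 0.2*rand.Float64()))
        return notBefore.Add(jittered)
    }

    func main() {
        notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC) // expiry seen above
        notBefore := notAfter.Add(-365 * 24 * time.Hour)          // assumed lifetime
        fmt.Println("rotation deadline:", nextRotationDeadline(notBefore, notAfter))
    }

Because the deadline is drawn fresh on each pass, every certificate_manager.go line in this section reports a different value; a deadline already in the past means rotation is due immediately.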
Jan 22 13:44:22 crc kubenswrapper[4769]: E0122 13:44:22.883363 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba"
Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.977086 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.977148 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.977240 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.977265 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.977283 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:22Z","lastTransitionTime":"2026-01-22T13:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.080874 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.080942 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.080959 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.080983 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
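Every Ready=False condition in this section bottoms out in the same message: no CNI configuration file in /etc/kubernetes/cni/net.d/. A quick way to see what the runtime is (not) finding is to list that directory for the file extensions CNI config loaders accept; this is an illustrative sketch of the check, not the kubelet's actual code path.

    // cnicheck.go: illustrative scan of the CNI conf dir named in the log;
    // the accepted extensions mirror common libcni behavior (assumption).
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        confDir := "/etc/kubernetes/cni/net.d" // directory from the log message
        entries, err := os.ReadDir(confDir)
        if err != nil {
            fmt.Println("cannot read CNI conf dir:", err)
            return
        }
        found := 0
        for _, e := range entries {
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                fmt.Println("CNI config:", filepath.Join(confDir, e.Name()))
                found++
            }
        }
        if found == 0 {
            fmt.Println("no CNI configuration file in", confDir) // the state reported above
        }
    }

An empty result here matches the NetworkPluginNotReady reason the kubelet keeps attaching to the node's Ready condition.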
Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.081001 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:23Z","lastTransitionTime":"2026-01-22T13:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.183339 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.183413 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.183430 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.183454 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.183471 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:23Z","lastTransitionTime":"2026-01-22T13:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.286158 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.286222 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.286238 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.286255 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.286269 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:23Z","lastTransitionTime":"2026-01-22T13:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.388708 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.388785 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.388842 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.388874 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
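The setters.go:603 entries serialize the node's Ready condition with a fixed field set: type, status, lastHeartbeatTime, lastTransitionTime, reason, message. A self-contained Go sketch mirroring that JSON shape (a local struct copied from the log payload, not the k8s.io/api types):

    // condition.go: local mirror of the Ready-condition JSON printed by
    // setters.go above; field names are taken verbatim from the log.
    package main

    import (
        "encoding/json"
        "fmt"
        "time"
    )

    type nodeCondition struct {
        Type               string `json:"type"`
        Status             string `json:"status"`
        LastHeartbeatTime  string `json:"lastHeartbeatTime"`
        LastTransitionTime string `json:"lastTransitionTime"`
        Reason             string `json:"reason"`
        Message            string `json:"message"`
    }

    func main() {
        now := time.Now().UTC().Format(time.RFC3339)
        c := nodeCondition{
            Type:               "Ready",
            Status:             "False",
            LastHeartbeatTime:  now,
            LastTransitionTime: now,
            Reason:             "KubeletNotReady",
            Message:            "container runtime network not ready: NetworkReady=false",
        }
        b, _ := json.Marshal(c)
        fmt.Println(string(b)) // same shape as the condition={...} payloads above
    }

In these lines lastTransitionTime advances together with lastHeartbeatTime on every attempt while the node stays NotReady.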
Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.388896 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:23Z","lastTransitionTime":"2026-01-22T13:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.491833 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.491897 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.491915 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.491939 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.491956 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:23Z","lastTransitionTime":"2026-01-22T13:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.594898 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.594959 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.594977 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.595004 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.595021 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:23Z","lastTransitionTime":"2026-01-22T13:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.699530 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.699599 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.699617 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.699643 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
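Each failed node-status patch in this section is rejected by the node.network-node-identity.openshift.io webhook with the same TLS error: x509: certificate has expired or is not yet valid, current time 2026-01-22T13:44:22Z against a NotAfter of 2025-08-24T17:21:41Z. The validity-window check can be reproduced against the webhook's serving certificate with crypto/x509; a hedged sketch follows (the PEM path is a placeholder, not a path from this log).

    // certcheck.go: reproduce the x509 validity-window check behind the
    // webhook failures above; the certificate path is a placeholder.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("/path/to/webhook-serving-cert.pem") // placeholder
        if err != nil {
            fmt.Println("read:", err)
            return
        }
        block, _ := pem.Decode(data)
        if block == nil {
            fmt.Println("no PEM block found")
            return
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Println("parse:", err)
            return
        }
        now := time.Now().UTC()
        switch {
        case now.After(cert.NotAfter):
            // The failure mode the kubelet keeps hitting in this log.
            fmt.Printf("certificate has expired: current time %s is after %s\n",
                now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
        case now.Before(cert.NotBefore):
            fmt.Println("certificate is not yet valid")
        default:
            fmt.Println("certificate is within its validity window")
        }
    }

Until that certificate is rotated, every attempt fails the same way, which is why the kubelet earlier reported update node status exceeds retry count.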
Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.699661 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:23Z","lastTransitionTime":"2026-01-22T13:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.802219 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.802286 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.802309 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.802339 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.802363 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:23Z","lastTransitionTime":"2026-01-22T13:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.852994 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 13:21:30.736238562 +0000 UTC
Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.882549 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 22 13:44:23 crc kubenswrapper[4769]: E0122 13:44:23.882706 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.882549 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.882869 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 22 13:44:23 crc kubenswrapper[4769]: E0122 13:44:23.883023 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 22 13:44:23 crc kubenswrapper[4769]: E0122 13:44:23.883216 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.905258 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.905316 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.905333 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.905354 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.905371 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:23Z","lastTransitionTime":"2026-01-22T13:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.008022 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.008080 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.008097 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.008120 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.008138 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:24Z","lastTransitionTime":"2026-01-22T13:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.111217 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.111271 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.111288 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.111313 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.111330 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:24Z","lastTransitionTime":"2026-01-22T13:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.214067 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.214143 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.214163 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.214192 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.214209 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:24Z","lastTransitionTime":"2026-01-22T13:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.316195 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.316229 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.316237 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.316249 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.316257 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:24Z","lastTransitionTime":"2026-01-22T13:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.418761 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.418864 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.418886 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.418915 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.418941 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:24Z","lastTransitionTime":"2026-01-22T13:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.521996 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.522070 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.522095 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.522132 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.522157 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:24Z","lastTransitionTime":"2026-01-22T13:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.625709 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.625770 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.625787 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.625857 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.625874 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:24Z","lastTransitionTime":"2026-01-22T13:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.728538 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.728655 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.728673 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.728691 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.728708 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:24Z","lastTransitionTime":"2026-01-22T13:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.831311 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.831363 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.831378 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.831398 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.831414 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:24Z","lastTransitionTime":"2026-01-22T13:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.853646 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 10:32:58.444640367 +0000 UTC Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.883424 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:44:24 crc kubenswrapper[4769]: E0122 13:44:24.883628 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.934864 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.935340 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.935591 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.935866 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.936105 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:24Z","lastTransitionTime":"2026-01-22T13:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.038528 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.038588 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.038597 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.038609 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.038619 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:25Z","lastTransitionTime":"2026-01-22T13:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.140718 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.140775 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.140810 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.140837 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.140853 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:25Z","lastTransitionTime":"2026-01-22T13:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.243776 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.244073 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.244161 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.244270 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.244384 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:25Z","lastTransitionTime":"2026-01-22T13:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.347920 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.348057 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.348084 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.348110 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.348129 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:25Z","lastTransitionTime":"2026-01-22T13:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.450876 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.450929 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.450942 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.450958 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.450970 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:25Z","lastTransitionTime":"2026-01-22T13:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.554263 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.554314 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.554326 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.554343 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.554354 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:25Z","lastTransitionTime":"2026-01-22T13:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.656964 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.657009 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.657021 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.657037 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.657049 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:25Z","lastTransitionTime":"2026-01-22T13:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.759766 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.759848 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.759864 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.759883 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.759898 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:25Z","lastTransitionTime":"2026-01-22T13:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.853953 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 19:06:15.353581727 +0000 UTC
Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.861942 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.861976 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.861987 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.862003 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.862015 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:25Z","lastTransitionTime":"2026-01-22T13:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.882429 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 22 13:44:25 crc kubenswrapper[4769]: E0122 13:44:25.882553 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.882438 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
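The sandbox failures above have the same root as the NotReady condition: no CNI config exists under /etc/kubernetes/cni/net.d/. For orientation, the file the kubelet is waiting for is a small JSON document of the general shape sketched below; the name and type values are illustrative assumptions, since on this cluster OVN-Kubernetes writes its own config once ovnkube-controller is healthy:

import json

# Generic CNI network config shape (assumed values, not copied from this cluster).
cni_conf = {
    "cniVersion": "0.4.0",
    "name": "ovn-kubernetes",       # network name (assumption)
    "type": "ovn-k8s-cni-overlay",  # CNI plugin binary name (assumption)
    "logLevel": "4",
}
print(json.dumps(cni_conf, indent=2))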
Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.882975 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 22 13:44:25 crc kubenswrapper[4769]: E0122 13:44:25.883116 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.883342 4769 scope.go:117] "RemoveContainer" containerID="21a6f61ed512e5cacca4b895a2de4369e69b116f0a55236b623ab8f3bb9a938a"
Jan 22 13:44:25 crc kubenswrapper[4769]: E0122 13:44:25.883422 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.965897 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.965957 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.965979 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.966011 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.966033 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:25Z","lastTransitionTime":"2026-01-22T13:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
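A sketch of extracting the pod reference and UID from the pod_workers.go records above, for example to tally which pods are blocked on the missing network. The sample record is condensed from this log (the err value is abridged with "..."):

import re

rec = ('E0122 13:44:25.883116 4769 pod_workers.go:1301] "Error syncing pod, skipping" '
       'err="network is not ready: ..." pod="openshift-network-diagnostics/'
       'network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"')

m = re.search(r'pod="([^"]+)" podUID="([^"]+)"', rec)
if m:
    print(m.group(1), m.group(2))  # namespace/name followed by the pod UID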
Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.068440 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.068492 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.068509 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.068533 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.068554 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:26Z","lastTransitionTime":"2026-01-22T13:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.170491 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.170533 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.170545 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.170561 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.170574 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:26Z","lastTransitionTime":"2026-01-22T13:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.245340 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jrg8z_9c028db8-99b9-422d-ba46-e1a2db06ce3c/ovnkube-controller/1.log" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.247546 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" event={"ID":"9c028db8-99b9-422d-ba46-e1a2db06ce3c","Type":"ContainerStarted","Data":"c8a2c8b17afe59bd8ef3c5908ea0b3175ae6612f24331a32bc0626daa47d5d14"} Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.248088 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.268558 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8acbc9bd-4838-4547-80a1-a1e16c37bd1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5acd2ae7d3fbf9440fb0095eefddc52c84728ec6d0e4d9821229cb6aa12573d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5c6216f99e720568a7dd7656244ade4b4fd44412e714b6135242a19f78761f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-cer
ts\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d261d3f5d679cb5cf8f94fe77bb969fd91ca614da19b149ce6eb02a380114d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://752fd19e1b4a5fd4026faf94d00dfd9322825c19436cccb582e074f4c203bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04ae80bb757e0b6ea9a3759ec49af25cff9106966608aa9750a0026e6053d15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:26Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.272128 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.272159 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.272170 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.272186 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.272198 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:26Z","lastTransitionTime":"2026-01-22T13:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.281732 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2dae2b-fadf-49f8-8ff4-4772ad128dca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45689b9787ddf9d1ee0769e761b7b24a3804e1a92e4bc522b5fc4ef4dc145ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83046a8dcab554309ccb822f7ea451bd90e9bbe0037f2b51061cb2943122ac47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad0f1c98d4f80ff4fb3c0a0b65518ea6fdf45f4e428661eb8c34419d0774da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/st
atic-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0a1825528a237f05c91fcea30668c1b12f4b58a604912844e672424907cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:26Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.297566 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340e014090eff13efd7d84a279f298e5d933733828964a3926f5ce780010c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:26Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.315249 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:26Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.337148 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6f2c33f3aa2c24eb46a672d0d23d034a7833eb85c8d4c7313b04fd659d7db54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f
8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:06Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d9wdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:26Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.358800 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c028db8-99b9-422d-ba46-e1a2db06ce3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a2c8b17afe59bd8ef3c5908ea0b3175ae6612f
24331a32bc0626daa47d5d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21a6f61ed512e5cacca4b895a2de4369e69b116f0a55236b623ab8f3bb9a938a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T13:44:11Z\\\",\\\"message\\\":\\\" 6154 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:10Z is after 2025-08-24T17:21:41Z]\\\\nI0122 13:44:11.061703 6154 services_controller.go:434] Service openshift-operator-lifecycle-manager/packageserver-service retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{packageserver-service openshift-operator-lifecycle-manager a60a1f74-c6ff-4c81-96ae-27ba9796ba61 5485 0 2025-02-23 05:23:24 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[olm.managed:true] map[] [{operators.coreos.com/v1alpha1 ClusterServiceVersion packageserver bbc08db6-5ba4-4fc4-b49d-26331e1e728b 0xc007b5cb4d 0xc007b5cb4e}] [] 
[]},Spec:ServiceSpec{Ports:[]ServicePo\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\
"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:26Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.374443 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.374485 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.374496 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.374512 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.374523 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:26Z","lastTransitionTime":"2026-01-22T13:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.379315 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5e43a9-5dd9-470e-a3e1-65be2c0003c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 13:43:58.922221 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 13:43:58.922428 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 13:43:58.923819 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3279922669/tls.crt::/tmp/serving-cert-3279922669/tls.key\\\\\\\"\\\\nI0122 13:43:59.114141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 13:43:59.115832 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 13:43:59.115850 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 13:43:59.115871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 13:43:59.115876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 13:43:59.120589 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 13:43:59.120619 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120629 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 13:43:59.120651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 13:43:59.120656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 13:43:59.120662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 13:43:59.120676 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 13:43:59.123476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:26Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.417361 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:26Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.440403 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b962200fa0a2d9c500125f56e96656fb0feb5d500a768d148cf9ca2a0569f970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f407182e277e55ccfc4bdc9fdd0832e0eab3cad2df3b32a66189e599dd303f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:26Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.457455 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b546a5839399c4434cd7427e59be44414fd81cd31a3b8736bbc23d9c03ab5fd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:26Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.470065 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0af8746-c9f0-48e6-8a60-02fed286b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be9c299672c1ced5d45263cb73ea0d7766a3cd47bbb16996c91c24206203ffe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9528976f6e0625739097546d794445c24881673bfd0df525a77bbbd61e67897d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hwhw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:26Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.477103 4769 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.477159 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.477171 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.477189 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.477201 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:26Z","lastTransitionTime":"2026-01-22T13:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.480200 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bqn6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16fc232a-07ad-4611-8612-7b1c3f784c14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ec909e6b2784f7c1e2fca82827a0c42109ef1c08e0c2ae484574c8f2c6460f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bqn6j\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:26Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.491338 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:26Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.500674 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x582x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34fa095e-fc7f-431c-8421-1178e63721ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c630331d1e07a9bd283dfb95e9324ecc7666491cb2a4383ba82f45bbcaee3ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c8w6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x582x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:26Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.512726 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fclh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4186e93-df8a-49d3-9068-c8b8acd05baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4e835bfb6d47d3628a5f67cb226a00d51c3eebec57de5db55a54406bf1e6122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk8w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fclh4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:26Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.523230 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pwktf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29c69aef-2c74-4731-8334-85c8c755be74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05de7d7a90042aebcc3f9c3ecd82febecef6e209d3c12dfe22a55b0a2960afdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m892q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10390dacc9fe0452c4b8e2f3b43ffa16abdb260918a2cea271e546875c22cd84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m892q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pwktf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:26Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.532350 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cfh49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9764ff0b-ae92-470b-af85-7c8bb41642ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vshp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vshp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cfh49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:26Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.579598 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.579648 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.579660 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.579680 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.579692 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:26Z","lastTransitionTime":"2026-01-22T13:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.681769 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.681816 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.681826 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.681840 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.681856 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:26Z","lastTransitionTime":"2026-01-22T13:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.785062 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.785112 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.785133 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.785159 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.785176 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:26Z","lastTransitionTime":"2026-01-22T13:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.854348 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 21:08:27.953493574 +0000 UTC Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.883873 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:44:26 crc kubenswrapper[4769]: E0122 13:44:26.884490 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.888266 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.888310 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.888327 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.888350 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.888366 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:26Z","lastTransitionTime":"2026-01-22T13:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.990846 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.990928 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.990958 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.990988 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.991005 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:26Z","lastTransitionTime":"2026-01-22T13:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.094348 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.094428 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.094451 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.094478 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.094496 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:27Z","lastTransitionTime":"2026-01-22T13:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.197362 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.197463 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.197493 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.197523 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.197546 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:27Z","lastTransitionTime":"2026-01-22T13:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.254524 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jrg8z_9c028db8-99b9-422d-ba46-e1a2db06ce3c/ovnkube-controller/2.log" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.255837 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jrg8z_9c028db8-99b9-422d-ba46-e1a2db06ce3c/ovnkube-controller/1.log" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.261001 4769 generic.go:334] "Generic (PLEG): container finished" podID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerID="c8a2c8b17afe59bd8ef3c5908ea0b3175ae6612f24331a32bc0626daa47d5d14" exitCode=1 Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.261078 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" event={"ID":"9c028db8-99b9-422d-ba46-e1a2db06ce3c","Type":"ContainerDied","Data":"c8a2c8b17afe59bd8ef3c5908ea0b3175ae6612f24331a32bc0626daa47d5d14"} Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.261149 4769 scope.go:117] "RemoveContainer" containerID="21a6f61ed512e5cacca4b895a2de4369e69b116f0a55236b623ab8f3bb9a938a" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.262265 4769 scope.go:117] "RemoveContainer" containerID="c8a2c8b17afe59bd8ef3c5908ea0b3175ae6612f24331a32bc0626daa47d5d14" Jan 22 13:44:27 crc kubenswrapper[4769]: E0122 13:44:27.262615 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-jrg8z_openshift-ovn-kubernetes(9c028db8-99b9-422d-ba46-e1a2db06ce3c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.284693 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:27Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.300311 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.300376 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.300389 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.300408 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.300422 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:27Z","lastTransitionTime":"2026-01-22T13:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.303047 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x582x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34fa095e-fc7f-431c-8421-1178e63721ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c630331d1e07a9bd283dfb95e9324ecc7666491cb2a4383ba82f45bbcaee3ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c8w6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x582x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:27Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.314479 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fclh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4186e93-df8a-49d3-9068-c8b8acd05baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4e835bfb6d47d3628a5f67cb226a00d51c3eebec57de5db55a54406bf1e6122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk8w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fclh4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:27Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.325866 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pwktf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29c69aef-2c74-4731-8334-85c8c755be74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05de7d7a90042aebcc3f9c3ecd82febecef6e209d3c12dfe22a55b0a2960afdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m892q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10390dacc9fe0452c4b8e2f3b43ffa16abdb260918a2cea271e546875c22cd84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m892q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pwktf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:27Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.339720 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cfh49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9764ff0b-ae92-470b-af85-7c8bb41642ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vshp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vshp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cfh49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:27Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.353839 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6f2c33f3aa2c24eb46a672d0d23d034a7833eb85c8d4c7313b04fd659d7db54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d9wdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:27Z is after 
2025-08-24T17:21:41Z" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.372701 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c028db8-99b9-422d-ba46-e1a2db06ce3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a
63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a2c8b17afe59bd8ef3c5908ea0b3175ae6612f24331a32bc0626daa47d5d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21a6f61ed512e5cacca4b895a2de4369e69b116f0a55236b623ab8f3bb9a938a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T13:44:11Z\\\",\\\"message\\\":\\\" 6154 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:10Z is after 2025-08-24T17:21:41Z]\\\\nI0122 13:44:11.061703 6154 services_controller.go:434] Service openshift-operator-lifecycle-manager/packageserver-service retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{packageserver-service openshift-operator-lifecycle-manager a60a1f74-c6ff-4c81-96ae-27ba9796ba61 5485 0 2025-02-23 05:23:24 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[olm.managed:true] map[] [{operators.coreos.com/v1alpha1 ClusterServiceVersion packageserver bbc08db6-5ba4-4fc4-b49d-26331e1e728b 0xc007b5cb4d 0xc007b5cb4e}] [] 
[]},Spec:ServiceSpec{Ports:[]ServicePo\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8a2c8b17afe59bd8ef3c5908ea0b3175ae6612f24331a32bc0626daa47d5d14\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T13:44:27Z\\\",\\\"message\\\":\\\"vent on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0122 13:44:26.875689 6349 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI0122 13:44:26.875703 6349 obj_retry.go:303] Retry object setup: *v1.Pod openshift-etcd/etcd-crc\\\\nI0122 13:44:26.875703 6349 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0122 13:44:26.875715 6349 obj_retry.go:365] Adding new object: *v1.Pod openshift-etcd/etcd-crc\\\\nI0122 13:44:26.875722 6349 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0122 13:44:26.875727 6349 ovn.go:134] Ensuring zone local for Pod openshift-etcd/etcd-crc in node crc\\\\nI0122 13:44:26.875737 6349 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nI0122 13:44:26.875745 6349 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nF0122 13:44:26.875769 6349 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, 
handle\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:27Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.403486 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.403534 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.403546 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.403564 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.403575 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:27Z","lastTransitionTime":"2026-01-22T13:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.441991 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8acbc9bd-4838-4547-80a1-a1e16c37bd1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5acd2ae7d3fbf9440fb0095eefddc52c84728ec6d0e4d9821229cb6aa12573d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5c6216f99e720568a7dd7656244ade4b4fd44412e714b6135242a19f78761f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d261d3f5d679cb5cf8f94fe77bb969fd91ca614da19b149ce6eb02a380114d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://752fd19e1b4a5fd4026faf94d00dfd9322825c19436cccb582e074f4c203bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04ae80bb757e0b6ea9a3759ec49af25cff9106966608aa9750a0026e6053d15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:27Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.457344 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2dae2b-fadf-49f8-8ff4-4772ad128dca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45689b9787ddf9d1ee0769e761b7b24a3804e1a92e4bc522b5fc4ef4dc145ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83046a8dcab554309ccb822f7ea451bd90e9bbe0037f2b51061cb2943122ac47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad0f1c98d4f80ff4fb3c0a0b65518ea6fdf45f4e428661eb8c34419d0774da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0a1825528a237f05c91fcea30668c1b12f4b58a604912844e672424907cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:27Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.472027 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340e014090eff13efd7d84a279f298e5d933733828964a3926f5ce780010c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:27Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.486087 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:27Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.505811 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.505847 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.505870 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.505885 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.505896 4769 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:27Z","lastTransitionTime":"2026-01-22T13:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.510711 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5e43a9-5dd9-470e-a3e1-65be2c0003c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 13:43:58.922221 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 13:43:58.922428 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 13:43:58.923819 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3279922669/tls.crt::/tmp/serving-cert-3279922669/tls.key\\\\\\\"\\\\nI0122 13:43:59.114141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 13:43:59.115832 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 13:43:59.115850 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 13:43:59.115871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 13:43:59.115876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 13:43:59.120589 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 13:43:59.120619 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120629 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 13:43:59.120651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 13:43:59.120656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 13:43:59.120662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 13:43:59.120676 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 13:43:59.123476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:27Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.523145 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:27Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.541127 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b962200fa0a2d9c500125f56e96656fb0feb5d500a768d148cf9ca2a0569f970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f407182e277e55ccfc4bdc9fdd0832e0eab3cad2df3b32a66189e599dd303f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:27Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.553735 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b546a5839399c4434cd7427e59be44414fd81cd31a3b8736bbc23d9c03ab5fd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:27Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.573243 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0af8746-c9f0-48e6-8a60-02fed286b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be9c299672c1ced5d45263cb73ea0d7766a3cd47bbb16996c91c24206203ffe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9528976f6e0625739097546d794445c24881673bfd0df525a77bbbd61e67897d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hwhw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:27Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.584591 4769 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-bqn6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16fc232a-07ad-4611-8612-7b1c3f784c14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ec909e6b2784f7c1e2fca82827a0c42109ef1c08e0c2ae484574c8f2c6460f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bqn6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:27Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.608024 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.608067 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.608082 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.608112 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.608129 4769 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:27Z","lastTransitionTime":"2026-01-22T13:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.711253 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.711295 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.711308 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.711326 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.711341 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:27Z","lastTransitionTime":"2026-01-22T13:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.813858 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.813918 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.813934 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.813956 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.813974 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:27Z","lastTransitionTime":"2026-01-22T13:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.854996 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 06:27:48.10866075 +0000 UTC Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.882776 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.882832 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:44:27 crc kubenswrapper[4769]: E0122 13:44:27.882986 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.883109 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:44:27 crc kubenswrapper[4769]: E0122 13:44:27.883252 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:44:27 crc kubenswrapper[4769]: E0122 13:44:27.883361 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.916450 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.916524 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.916548 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.916578 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.916603 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:27Z","lastTransitionTime":"2026-01-22T13:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.019558 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.019660 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.019685 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.019711 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.019727 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:28Z","lastTransitionTime":"2026-01-22T13:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.123293 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.123360 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.123379 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.123405 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.123424 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:28Z","lastTransitionTime":"2026-01-22T13:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.227006 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.227084 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.227109 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.227139 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.227162 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:28Z","lastTransitionTime":"2026-01-22T13:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.268540 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jrg8z_9c028db8-99b9-422d-ba46-e1a2db06ce3c/ovnkube-controller/2.log" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.273745 4769 scope.go:117] "RemoveContainer" containerID="c8a2c8b17afe59bd8ef3c5908ea0b3175ae6612f24331a32bc0626daa47d5d14" Jan 22 13:44:28 crc kubenswrapper[4769]: E0122 13:44:28.274085 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-jrg8z_openshift-ovn-kubernetes(9c028db8-99b9-422d-ba46-e1a2db06ce3c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.293916 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cfh49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9764ff0b-ae92-470b-af85-7c8bb41642ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vshp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vshp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cfh49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:28Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.312746 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:28Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.329917 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x582x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34fa095e-fc7f-431c-8421-1178e63721ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c630331d1e07a9bd283dfb95e9324ecc7666491cb2a4383ba82f45bbcaee3ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c8w6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x582x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-22T13:44:28Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.330419 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.330497 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.330514 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.330535 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.330550 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:28Z","lastTransitionTime":"2026-01-22T13:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.352327 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fclh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4186e93-df8a-49d3-9068-c8b8acd05baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4e835bfb6d47d3628a5f67cb226a00d51c3eebec57de5db55a54406bf1e6122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":
\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk8w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fclh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:28Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.367189 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pwktf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29c69aef-2c74-4731-8334-85c8c755be74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05de7d7a90042aebcc3f9c3ecd82febecef6e209d3c12dfe22a55b0a2960afdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m892q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10390dacc9fe0452c4b8e2f3b43ffa16abdb260918a2cea271e546875c22cd84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m892q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pwktf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:28Z is after 2025-08-24T17:21:41Z" Jan 22 
13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.384462 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:28Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.404208 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6f2c33f3aa2c24eb46a672d0d23d034a7833eb85c8d4c7313b04fd659d7db54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d9wdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:28Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.428006 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c028db8-99b9-422d-ba46-e1a2db06ce3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a2c8b17afe59bd8ef3c5908ea0b3175ae6612f24331a32bc0626daa47d5d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8a2c8b17afe59bd8ef3c5908ea0b3175ae6612f24331a32bc0626daa47d5d14\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T13:44:27Z\\\",\\\"message\\\":\\\"vent on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0122 13:44:26.875689 6349 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI0122 13:44:26.875703 6349 obj_retry.go:303] Retry object setup: *v1.Pod openshift-etcd/etcd-crc\\\\nI0122 13:44:26.875703 6349 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0122 13:44:26.875715 6349 obj_retry.go:365] Adding new object: *v1.Pod openshift-etcd/etcd-crc\\\\nI0122 13:44:26.875722 6349 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0122 13:44:26.875727 6349 ovn.go:134] Ensuring zone local for Pod openshift-etcd/etcd-crc in node crc\\\\nI0122 13:44:26.875737 6349 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nI0122 13:44:26.875745 6349 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nF0122 13:44:26.875769 6349 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handle\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jrg8z_openshift-ovn-kubernetes(9c028db8-99b9-422d-ba46-e1a2db06ce3c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:28Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.434315 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.434360 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.434371 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.434389 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.434402 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:28Z","lastTransitionTime":"2026-01-22T13:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.453288 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8acbc9bd-4838-4547-80a1-a1e16c37bd1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5acd2ae7d3fbf9440fb0095eefddc52c84728ec6d0e4d9821229cb6aa12573d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5c6216f99e720568a7dd7656244ade4b4fd44412e714b6135242a19f78761f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d261d3f5d679cb5cf8f94fe77bb969fd91ca614da19b149ce6eb02a380114d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://752fd19e1b4a5fd4026faf94d00dfd9322825c19436cccb582e074f4c203bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04ae80bb757e0b6ea9a3759ec49af25cff9106966608aa9750a0026e6053d15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:28Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.472784 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2dae2b-fadf-49f8-8ff4-4772ad128dca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45689b9787ddf9d1ee0769e761b7b24a3804e1a92e4bc522b5fc4ef4dc145ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83046a8dcab554309ccb822f7ea451bd90e9bbe0037f2b51061cb2943122ac47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad0f1c98d4f80ff4fb3c0a0b65518ea6fdf45f4e428661eb8c34419d0774da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0a1825528a237f05c91fcea30668c1b12f4b58a604912844e672424907cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:28Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.493304 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340e014090eff13efd7d84a279f298e5d933733828964a3926f5ce780010c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:28Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.511165 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b546a5839399c4434cd7427e59be44414fd81cd31a3b8736bbc23d9c03ab5fd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:28Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.531302 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5e43a9-5dd9-470e-a3e1-65be2c0003c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 13:43:58.922221 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 13:43:58.922428 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 13:43:58.923819 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3279922669/tls.crt::/tmp/serving-cert-3279922669/tls.key\\\\\\\"\\\\nI0122 13:43:59.114141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 13:43:59.115832 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 13:43:59.115850 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 13:43:59.115871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 13:43:59.115876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 13:43:59.120589 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 13:43:59.120619 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120629 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 13:43:59.120651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 13:43:59.120656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 13:43:59.120662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 13:43:59.120676 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 13:43:59.123476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:28Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.536713 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.536763 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.536780 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.536834 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.536852 4769 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:28Z","lastTransitionTime":"2026-01-22T13:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.549139 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:28Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.565087 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b962200fa0a2d9c500125f56e96656fb0feb5d500a768d148cf9ca2a0569f970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f407182e277e55ccfc4bdc9fdd0832e0eab3cad2df3b32a66189e599dd303f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:28Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.581595 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0af8746-c9f0-48e6-8a60-02fed286b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be9c299672c1ced5d45263cb73ea0d7766a3cd47bbb16996c91c24206203ffe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9528976f6e0625739097546d794445c24881673bfd0df525a77bbbd61e67897d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hwhw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:28Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.595389 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bqn6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16fc232a-07ad-4611-8612-7b1c3f784c14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ec909e6b2784f7c1e2fca82827a0c42109ef1c08e0c2ae484574c8f2c6460f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bqn6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:28Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:28 crc 
Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.640113 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.640200 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.640229 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.640263 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.640287 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:28Z","lastTransitionTime":"2026-01-22T13:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.743739 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.743836 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.743854 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.743882 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.743905 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:28Z","lastTransitionTime":"2026-01-22T13:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.846676 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.846712 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.846721 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.846735 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.846744 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:28Z","lastTransitionTime":"2026-01-22T13:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.855259 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 06:34:56.286933476 +0000 UTC
Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.882971 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49"
Jan 22 13:44:28 crc kubenswrapper[4769]: E0122 13:44:28.883158 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba"
Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.950461 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.950531 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.950556 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.950586 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.950611 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:28Z","lastTransitionTime":"2026-01-22T13:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.054135 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.054204 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.054223 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.054250 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.054268 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:29Z","lastTransitionTime":"2026-01-22T13:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.158664 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.158726 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.158745 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.158771 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.158817 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:29Z","lastTransitionTime":"2026-01-22T13:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.261512 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.261589 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.261599 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.261614 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.261625 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:29Z","lastTransitionTime":"2026-01-22T13:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.363720 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.363832 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.363859 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.363886 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.363908 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:29Z","lastTransitionTime":"2026-01-22T13:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.467443 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.467496 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.467511 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.467533 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.467546 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:29Z","lastTransitionTime":"2026-01-22T13:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.570597 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.570627 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.570636 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.570648 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.570657 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:29Z","lastTransitionTime":"2026-01-22T13:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.673986 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.674029 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.674041 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.674059 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.674072 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:29Z","lastTransitionTime":"2026-01-22T13:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.777243 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.777329 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.777359 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.777392 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.777415 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:29Z","lastTransitionTime":"2026-01-22T13:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.855371 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 13:04:24.640498606 +0000 UTC
Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.880602 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.880664 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.880678 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.880694 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.880705 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:29Z","lastTransitionTime":"2026-01-22T13:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.883227 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.883227 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 22 13:44:29 crc kubenswrapper[4769]: E0122 13:44:29.883407 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.883252 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 22 13:44:29 crc kubenswrapper[4769]: E0122 13:44:29.883483 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 22 13:44:29 crc kubenswrapper[4769]: E0122 13:44:29.883589 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.983930 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.984037 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.984056 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.984084 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.984167 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:29Z","lastTransitionTime":"2026-01-22T13:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.088241 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.088314 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.088332 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.088358 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.088379 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:30Z","lastTransitionTime":"2026-01-22T13:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.191686 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.191747 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.191763 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.191787 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.191840 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:30Z","lastTransitionTime":"2026-01-22T13:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.294671 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.294734 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.294752 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.294779 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.294832 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:30Z","lastTransitionTime":"2026-01-22T13:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.397313 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.397428 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.397452 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.397480 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.397501 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:30Z","lastTransitionTime":"2026-01-22T13:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.499755 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.499858 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.499883 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.499914 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.499937 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:30Z","lastTransitionTime":"2026-01-22T13:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.506971 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9764ff0b-ae92-470b-af85-7c8bb41642ba-metrics-certs\") pod \"network-metrics-daemon-cfh49\" (UID: \"9764ff0b-ae92-470b-af85-7c8bb41642ba\") " pod="openshift-multus/network-metrics-daemon-cfh49"
Jan 22 13:44:30 crc kubenswrapper[4769]: E0122 13:44:30.507227 4769 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 22 13:44:30 crc kubenswrapper[4769]: E0122 13:44:30.507328 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9764ff0b-ae92-470b-af85-7c8bb41642ba-metrics-certs podName:9764ff0b-ae92-470b-af85-7c8bb41642ba nodeName:}" failed. No retries permitted until 2026-01-22 13:44:46.507298115 +0000 UTC m=+65.918408084 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9764ff0b-ae92-470b-af85-7c8bb41642ba-metrics-certs") pod "network-metrics-daemon-cfh49" (UID: "9764ff0b-ae92-470b-af85-7c8bb41642ba") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.602925 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.602981 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.602999 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.603027 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.603043 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:30Z","lastTransitionTime":"2026-01-22T13:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.706774 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.706894 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.706917 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.706949 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.706973 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:30Z","lastTransitionTime":"2026-01-22T13:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.810768 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.810843 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.810873 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.810921 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.810945 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:30Z","lastTransitionTime":"2026-01-22T13:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.855865 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 09:11:20.958829379 +0000 UTC
Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.883309 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49"
Jan 22 13:44:30 crc kubenswrapper[4769]: E0122 13:44:30.883861 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba"
Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.901352 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:30Z is after 2025-08-24T17:21:41Z"
Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.914668 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.915019 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.915161 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.915332 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.915458 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:30Z","lastTransitionTime":"2026-01-22T13:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.920638 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x582x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34fa095e-fc7f-431c-8421-1178e63721ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c630331d1e07a9bd283dfb95e9324ecc7666491cb2a4383ba82f45bbcaee3ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c8w6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x582x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:30Z is after 2025-08-24T17:21:41Z"
Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.935485 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fclh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4186e93-df8a-49d3-9068-c8b8acd05baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4e835bfb6d47d3628a5f67cb226a00d51c3eebec57de5db55a54406bf1e6122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk8w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fclh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:30Z is after 2025-08-24T17:21:41Z"
Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.952157 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pwktf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29c69aef-2c74-4731-8334-85c8c755be74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05de7d7a90042aebcc3f9c3ecd82febecef6e209d3c12dfe22a55b0a2960afdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m892q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10390dacc9fe0452c4b8e2f3b43ffa16abdb260918a2cea271e546875c22cd84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m892q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pwktf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:30Z is after 2025-08-24T17:21:41Z"
Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.967674 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cfh49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9764ff0b-ae92-470b-af85-7c8bb41642ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vshp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vshp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cfh49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:30Z is after 2025-08-24T17:21:41Z"
Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.991944 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c028db8-99b9-422d-ba46-e1a2db06ce3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a2c8b17afe59bd8ef3c5908ea0b3175ae6612f24331a32bc0626daa47d5d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8a2c8b17afe59bd8ef3c5908ea0b3175ae6612f24331a32bc0626daa47d5d14\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T13:44:27Z\\\",\\\"message\\\":\\\"vent on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0122 13:44:26.875689 6349 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI0122 13:44:26.875703 6349 obj_retry.go:303] Retry object setup: *v1.Pod openshift-etcd/etcd-crc\\\\nI0122 13:44:26.875703 6349 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0122 13:44:26.875715 6349 obj_retry.go:365] Adding new object: *v1.Pod openshift-etcd/etcd-crc\\\\nI0122 13:44:26.875722 6349 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0122 13:44:26.875727 6349 ovn.go:134] Ensuring zone local for Pod openshift-etcd/etcd-crc in node crc\\\\nI0122 13:44:26.875737 6349 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nI0122 13:44:26.875745 6349 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nF0122 13:44:26.875769 6349 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handle\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-jrg8z_openshift-ovn-kubernetes(9c028db8-99b9-422d-ba46-e1a2db06ce3c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:30Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.018079 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.018426 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.018574 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.018745 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.018916 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:31Z","lastTransitionTime":"2026-01-22T13:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.021853 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8acbc9bd-4838-4547-80a1-a1e16c37bd1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5acd2ae7d3fbf9440fb0095eefddc52c84728ec6d0e4d9821229cb6aa12573d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5c6216f99e720568a7dd7656244ade4b4fd44412e714b6135242a19f78761f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d261d3f5d679cb5cf8f94fe77bb969fd91ca614da19b149ce6eb02a380114d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://752fd19e1b4a5fd4026faf94d00dfd9322825c19436cccb582e074f4c203bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04ae80bb757e0b6ea9a3759ec49af25cff9106966608aa9750a0026e6053d15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:31Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.037225 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2dae2b-fadf-49f8-8ff4-4772ad128dca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45689b9787ddf9d1ee0769e761b7b24a3804e1a92e4bc522b5fc4ef4dc145ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83046a8dcab554309ccb822f7ea451bd90e9bbe0037f2b51061cb2943122ac47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad0f1c98d4f80ff4fb3c0a0b65518ea6fdf45f4e428661eb8c34419d0774da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0a1825528a237f05c91fcea30668c1b12f4b58a604912844e672424907cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:31Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.051458 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340e014090eff13efd7d84a279f298e5d933733828964a3926f5ce780010c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:31Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.065621 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:31Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.082289 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6f2c33f3aa2c24eb46a672d0d23d034a7833eb85c8d4c7313b04fd659d7db54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d9wdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:31Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.100931 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5e43a9-5dd9-470e-a3e1-65be2c0003c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 13:43:58.922221 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 13:43:58.922428 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 13:43:58.923819 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3279922669/tls.crt::/tmp/serving-cert-3279922669/tls.key\\\\\\\"\\\\nI0122 13:43:59.114141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 13:43:59.115832 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 13:43:59.115850 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 13:43:59.115871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 13:43:59.115876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 13:43:59.120589 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 13:43:59.120619 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120629 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 13:43:59.120651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 13:43:59.120656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 13:43:59.120662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 13:43:59.120676 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 13:43:59.123476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:31Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.116179 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:31Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.121156 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.121204 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.121220 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.121240 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.121256 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:31Z","lastTransitionTime":"2026-01-22T13:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.133164 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b962200fa0a2d9c500125f56e96656fb0feb5d500a768d148cf9ca2a0569f970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f407182e277e55ccfc4bdc9fdd0832e0eab3cad2df3b32a66189e599dd303f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:31Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.147098 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b546a5839399c4434cd7427e59be44414fd81cd31a3b8736bbc23d9c03ab5fd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:31Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.159678 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0af8746-c9f0-48e6-8a60-02fed286b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be9c299672c1ced5d45263cb73ea0d7766a3cd47bbb16996c91c24206203ffe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9528976f6e0625739097546d794445c24881673bfd0df525a77bbbd61e67897d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hwhw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:31Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.173597 4769 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-bqn6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16fc232a-07ad-4611-8612-7b1c3f784c14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ec909e6b2784f7c1e2fca82827a0c42109ef1c08e0c2ae484574c8f2c6460f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bqn6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:31Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.223134 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.223179 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.223195 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.223216 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.223235 4769 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:31Z","lastTransitionTime":"2026-01-22T13:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.325632 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.325678 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.325690 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.325705 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.325717 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:31Z","lastTransitionTime":"2026-01-22T13:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.428473 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.428520 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.428538 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.428561 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.428576 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:31Z","lastTransitionTime":"2026-01-22T13:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.532205 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.532256 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.532268 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.532288 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.532302 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:31Z","lastTransitionTime":"2026-01-22T13:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.634947 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.634992 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.635003 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.635019 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.635029 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:31Z","lastTransitionTime":"2026-01-22T13:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.737656 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.737712 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.737729 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.737747 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.737757 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:31Z","lastTransitionTime":"2026-01-22T13:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.819732 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.819945 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:44:31 crc kubenswrapper[4769]: E0122 13:44:31.820022 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:45:03.819987653 +0000 UTC m=+83.231097632 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:44:31 crc kubenswrapper[4769]: E0122 13:44:31.820088 4769 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.820144 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:44:31 crc kubenswrapper[4769]: E0122 13:44:31.820168 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 13:45:03.820146157 +0000 UTC m=+83.231256126 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 13:44:31 crc kubenswrapper[4769]: E0122 13:44:31.820442 4769 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 13:44:31 crc kubenswrapper[4769]: E0122 13:44:31.820591 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 13:45:03.820562888 +0000 UTC m=+83.231672927 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.840995 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.841060 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.841077 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.841102 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.841119 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:31Z","lastTransitionTime":"2026-01-22T13:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.857764 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 12:33:00.423666844 +0000 UTC Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.883148 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.883321 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:44:31 crc kubenswrapper[4769]: E0122 13:44:31.883440 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.883515 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:44:31 crc kubenswrapper[4769]: E0122 13:44:31.883742 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:44:31 crc kubenswrapper[4769]: E0122 13:44:31.884049 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.921328 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.921446 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:44:31 crc kubenswrapper[4769]: E0122 13:44:31.921508 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 13:44:31 crc kubenswrapper[4769]: E0122 13:44:31.921542 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 13:44:31 crc kubenswrapper[4769]: E0122 13:44:31.921559 4769 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 13:44:31 crc kubenswrapper[4769]: E0122 13:44:31.921623 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-22 13:45:03.921601623 +0000 UTC m=+83.332711582 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 13:44:31 crc kubenswrapper[4769]: E0122 13:44:31.921647 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 13:44:31 crc kubenswrapper[4769]: E0122 13:44:31.921678 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 13:44:31 crc kubenswrapper[4769]: E0122 13:44:31.921702 4769 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 13:44:31 crc kubenswrapper[4769]: E0122 13:44:31.921778 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 13:45:03.921744277 +0000 UTC m=+83.332854246 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.944763 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.944863 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.944887 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.944921 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.944944 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:31Z","lastTransitionTime":"2026-01-22T13:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.047262 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.047293 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.047301 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.047314 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.047324 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:32Z","lastTransitionTime":"2026-01-22T13:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.150473 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.150536 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.150559 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.150589 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.150615 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:32Z","lastTransitionTime":"2026-01-22T13:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.254079 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.254128 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.254139 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.254158 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.254169 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:32Z","lastTransitionTime":"2026-01-22T13:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.356373 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.356403 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.356412 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.356425 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.356433 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:32Z","lastTransitionTime":"2026-01-22T13:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.460109 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.460161 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.460177 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.460195 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.460209 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:32Z","lastTransitionTime":"2026-01-22T13:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.562750 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.562850 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.562870 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.562898 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.562916 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:32Z","lastTransitionTime":"2026-01-22T13:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.664652 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.664722 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.664741 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.664768 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.664787 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:32Z","lastTransitionTime":"2026-01-22T13:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.767556 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.767585 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.767596 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.767610 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.767619 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:32Z","lastTransitionTime":"2026-01-22T13:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.858034 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 13:59:40.008294427 +0000 UTC Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.870254 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.870329 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.870349 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.870375 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.870392 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:32Z","lastTransitionTime":"2026-01-22T13:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.882491 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:44:32 crc kubenswrapper[4769]: E0122 13:44:32.882696 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.973832 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.973868 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.973876 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.973891 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.973900 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:32Z","lastTransitionTime":"2026-01-22T13:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.000447 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.000511 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.000532 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.000561 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.000583 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:33Z","lastTransitionTime":"2026-01-22T13:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:33 crc kubenswrapper[4769]: E0122 13:44:33.021147 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c179e315-653f-44a2-90da-146c8bca7b57\\\",\\\"systemUUID\\\":\\\"a3bb8776-1087-4679-a96f-5f1347bd430e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:33Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.025205 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.025280 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.025298 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.025321 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.025336 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:33Z","lastTransitionTime":"2026-01-22T13:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:33 crc kubenswrapper[4769]: E0122 13:44:33.041945 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c179e315-653f-44a2-90da-146c8bca7b57\\\",\\\"systemUUID\\\":\\\"a3bb8776-1087-4679-a96f-5f1347bd430e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:33Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.047574 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.047642 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.047666 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.047694 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.047716 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:33Z","lastTransitionTime":"2026-01-22T13:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:33 crc kubenswrapper[4769]: E0122 13:44:33.069150 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c179e315-653f-44a2-90da-146c8bca7b57\\\",\\\"systemUUID\\\":\\\"a3bb8776-1087-4679-a96f-5f1347bd430e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:33Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.073552 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.073580 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.073590 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.073603 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.073614 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:33Z","lastTransitionTime":"2026-01-22T13:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:33 crc kubenswrapper[4769]: E0122 13:44:33.097507 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c179e315-653f-44a2-90da-146c8bca7b57\\\",\\\"systemUUID\\\":\\\"a3bb8776-1087-4679-a96f-5f1347bd430e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:33Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.102995 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.103032 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.103042 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.103058 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.103067 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:33Z","lastTransitionTime":"2026-01-22T13:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:33 crc kubenswrapper[4769]: E0122 13:44:33.120176 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c179e315-653f-44a2-90da-146c8bca7b57\\\",\\\"systemUUID\\\":\\\"a3bb8776-1087-4679-a96f-5f1347bd430e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:33Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:33 crc kubenswrapper[4769]: E0122 13:44:33.120557 4769 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.122273 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.122384 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.122414 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.122442 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.122464 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:33Z","lastTransitionTime":"2026-01-22T13:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.225174 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.225236 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.225259 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.225287 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.225309 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:33Z","lastTransitionTime":"2026-01-22T13:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.327697 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.327749 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.327761 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.327783 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.327814 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:33Z","lastTransitionTime":"2026-01-22T13:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.430299 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.430358 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.430372 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.430392 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.430405 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:33Z","lastTransitionTime":"2026-01-22T13:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.533836 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.533892 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.533911 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.533932 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.533948 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:33Z","lastTransitionTime":"2026-01-22T13:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.637064 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.637126 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.637137 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.637173 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.637185 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:33Z","lastTransitionTime":"2026-01-22T13:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.739724 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.739845 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.739881 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.739905 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.739922 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:33Z","lastTransitionTime":"2026-01-22T13:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.842576 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.842626 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.842648 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.842672 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.842689 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:33Z","lastTransitionTime":"2026-01-22T13:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.858365 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 03:33:08.90780158 +0000 UTC Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.882751 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.882868 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.882753 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:44:33 crc kubenswrapper[4769]: E0122 13:44:33.882941 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:44:33 crc kubenswrapper[4769]: E0122 13:44:33.883104 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:44:33 crc kubenswrapper[4769]: E0122 13:44:33.883433 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.946252 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.946320 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.946508 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.946553 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.946571 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:33Z","lastTransitionTime":"2026-01-22T13:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.049097 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.049158 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.049177 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.049203 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.049224 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:34Z","lastTransitionTime":"2026-01-22T13:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.151885 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.151945 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.151964 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.151983 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.151995 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:34Z","lastTransitionTime":"2026-01-22T13:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.255620 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.255692 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.255715 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.255744 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.255767 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:34Z","lastTransitionTime":"2026-01-22T13:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.359070 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.359136 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.359160 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.359189 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.359210 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:34Z","lastTransitionTime":"2026-01-22T13:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.462199 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.462241 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.462252 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.462271 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.462287 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:34Z","lastTransitionTime":"2026-01-22T13:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.565869 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.565941 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.565953 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.565971 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.566005 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:34Z","lastTransitionTime":"2026-01-22T13:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.669316 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.669379 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.669396 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.669420 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.669440 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:34Z","lastTransitionTime":"2026-01-22T13:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.772227 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.772270 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.772293 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.772322 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.772344 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:34Z","lastTransitionTime":"2026-01-22T13:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.856364 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.859044 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 00:27:44.644797544 +0000 UTC Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.867096 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.873220 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x582x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34fa095e-fc7f-431c-8421-1178e63721ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c630331d1e07a9bd283dfb95e9324ecc7666491cb2a4383ba82f45bbcaee3ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c8w6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x582x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:34Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.874446 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.874533 4769 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.874614 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.875203 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.875294 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:34Z","lastTransitionTime":"2026-01-22T13:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.883205 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:44:34 crc kubenswrapper[4769]: E0122 13:44:34.883478 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.895564 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fclh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4186e93-df8a-49d3-9068-c8b8acd05baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4e835bfb6d47d3628a5f67cb226a00d51c3eebec57de5db55a54406bf1e6122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk8w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fclh4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:34Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.913961 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pwktf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29c69aef-2c74-4731-8334-85c8c755be74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05de7d7a90042aebcc3f9c3ecd82febecef6e209d3c12dfe22a55b0a2960afdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m892q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10390dacc9fe0452c4b8e2f3b43ffa16abdb260918a2cea271e546875c22cd84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m892q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pwktf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:34Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.929069 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cfh49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9764ff0b-ae92-470b-af85-7c8bb41642ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vshp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vshp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cfh49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:34Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.943711 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:34Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.967174 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8acbc9bd-4838-4547-80a1-a1e16c37bd1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5acd2ae7d3fbf9440fb0095eefddc52c84728ec6d0e4d9821229cb6aa12573d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5c6216f99e720568a7dd7656244ade4b4fd44412e714b6135242a19f78761f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d261d3f5d679cb5cf8f94fe77bb969fd91ca614da19b149ce6eb02a380114d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://752fd19e1b4a5fd4026faf94d00dfd9322825c1
9436cccb582e074f4c203bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04ae80bb757e0b6ea9a3759ec49af25cff9106966608aa9750a0026e6053d15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:34Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.978413 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.978449 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.978459 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.978476 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.978487 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:34Z","lastTransitionTime":"2026-01-22T13:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.984377 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2dae2b-fadf-49f8-8ff4-4772ad128dca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45689b9787ddf9d1ee0769e761b7b24a3804e1a92e4bc522b5fc4ef4dc145ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83046a8dcab554309ccb822f7ea451bd90e9bbe0037f2b51061cb2943122ac47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad0f1c98d4f80ff4fb3c0a0b65518ea6fdf45f4e428661eb8c34419d0774da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0a1825528a237f05c91fcea30668c1b12f4b58a604912844e672424907cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:34Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.000665 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340e014090eff13efd7d84a279f298e5d933733828964a3926f5ce780010c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for 
pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:34Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.013552 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:35Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.033161 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6f2c33f3aa2c24eb46a672d0d23d034a7833eb85c8d4c7313b04fd659d7db54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d9wdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:35Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.051561 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c028db8-99b9-422d-ba46-e1a2db06ce3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a2c8b17afe59bd8ef3c5908ea0b3175ae6612f
24331a32bc0626daa47d5d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8a2c8b17afe59bd8ef3c5908ea0b3175ae6612f24331a32bc0626daa47d5d14\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T13:44:27Z\\\",\\\"message\\\":\\\"vent on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0122 13:44:26.875689 6349 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI0122 13:44:26.875703 6349 obj_retry.go:303] Retry object setup: *v1.Pod openshift-etcd/etcd-crc\\\\nI0122 13:44:26.875703 6349 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0122 13:44:26.875715 6349 obj_retry.go:365] Adding new object: *v1.Pod openshift-etcd/etcd-crc\\\\nI0122 13:44:26.875722 6349 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0122 13:44:26.875727 6349 ovn.go:134] Ensuring zone local for Pod openshift-etcd/etcd-crc in node crc\\\\nI0122 13:44:26.875737 6349 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nI0122 13:44:26.875745 6349 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nF0122 13:44:26.875769 6349 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handle\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jrg8z_openshift-ovn-kubernetes(9c028db8-99b9-422d-ba46-e1a2db06ce3c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:35Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.067196 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5e43a9-5dd9-470e-a3e1-65be2c0003c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 13:43:58.922221 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 13:43:58.922428 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 13:43:58.923819 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3279922669/tls.crt::/tmp/serving-cert-3279922669/tls.key\\\\\\\"\\\\nI0122 13:43:59.114141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 13:43:59.115832 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 13:43:59.115850 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 13:43:59.115871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 13:43:59.115876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 13:43:59.120589 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 13:43:59.120619 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120629 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 13:43:59.120651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 13:43:59.120656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 13:43:59.120662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 13:43:59.120676 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 13:43:59.123476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:35Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.082423 4769 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:35Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.083320 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.083393 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.083417 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.083448 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.083472 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:35Z","lastTransitionTime":"2026-01-22T13:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.100197 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b962200fa0a2d9c500125f56e96656fb0feb5d500a768d148cf9ca2a0569f970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f407182e277e55ccfc4bdc9fdd0832e0eab3cad2df3b32a66189e599dd303f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:35Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.113751 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b546a5839399c4434cd7427e59be44414fd81cd31a3b8736bbc23d9c03ab5fd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:35Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.130031 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bqn6j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16fc232a-07ad-4611-8612-7b1c3f784c14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ec909e6b2784f7c1e2fca82827a0c42109ef1c08e0c2ae484574c8f2c6460f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bqn6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:35Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.147553 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0af8746-c9f0-48e6-8a60-02fed286b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be9c299672c1ced5d45263cb73ea0d7766a3cd47bbb16996c91c24206203ffe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9528976f6e0625739097546d794445c24881673bfd0df525a77bbbd61e67897d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hwhw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:35Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.187681 4769 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.187728 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.187737 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.187751 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.187760 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:35Z","lastTransitionTime":"2026-01-22T13:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.290435 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.290469 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.290479 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.290492 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.290502 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:35Z","lastTransitionTime":"2026-01-22T13:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.393506 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.393546 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.393557 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.393573 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.393585 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:35Z","lastTransitionTime":"2026-01-22T13:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.496260 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.496321 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.496344 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.496373 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.496395 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:35Z","lastTransitionTime":"2026-01-22T13:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.600608 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.600670 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.600688 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.600714 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.600733 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:35Z","lastTransitionTime":"2026-01-22T13:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.704084 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.704198 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.704220 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.704250 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.704279 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:35Z","lastTransitionTime":"2026-01-22T13:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.807731 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.807844 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.807864 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.807893 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.807912 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:35Z","lastTransitionTime":"2026-01-22T13:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.859742 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 17:24:42.354756485 +0000 UTC Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.883064 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:44:35 crc kubenswrapper[4769]: E0122 13:44:35.883273 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.883088 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:44:35 crc kubenswrapper[4769]: E0122 13:44:35.883390 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.883064 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:44:35 crc kubenswrapper[4769]: E0122 13:44:35.883446 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.910707 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.910740 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.910751 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.910766 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.910775 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:35Z","lastTransitionTime":"2026-01-22T13:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.013848 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.013899 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.013909 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.013927 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.013940 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:36Z","lastTransitionTime":"2026-01-22T13:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.117066 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.117110 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.117120 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.117135 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.117146 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:36Z","lastTransitionTime":"2026-01-22T13:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.219693 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.219925 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.219959 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.219984 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.220026 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:36Z","lastTransitionTime":"2026-01-22T13:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.323308 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.323390 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.323410 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.323437 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.323455 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:36Z","lastTransitionTime":"2026-01-22T13:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.427217 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.427306 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.427324 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.427345 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.427359 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:36Z","lastTransitionTime":"2026-01-22T13:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.529855 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.529983 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.530012 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.530040 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.530061 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:36Z","lastTransitionTime":"2026-01-22T13:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.633087 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.633173 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.633196 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.633226 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.633255 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:36Z","lastTransitionTime":"2026-01-22T13:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.737516 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.737580 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.737597 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.737622 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.737642 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:36Z","lastTransitionTime":"2026-01-22T13:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.840900 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.841003 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.841023 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.841059 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.841089 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:36Z","lastTransitionTime":"2026-01-22T13:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.860248 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 16:11:48.866439463 +0000 UTC Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.883380 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:44:36 crc kubenswrapper[4769]: E0122 13:44:36.883561 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.944695 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.944755 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.944773 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.944841 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.944856 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:36Z","lastTransitionTime":"2026-01-22T13:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:37 crc kubenswrapper[4769]: I0122 13:44:37.047996 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:37 crc kubenswrapper[4769]: I0122 13:44:37.048061 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:37 crc kubenswrapper[4769]: I0122 13:44:37.048081 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:37 crc kubenswrapper[4769]: I0122 13:44:37.048107 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:37 crc kubenswrapper[4769]: I0122 13:44:37.048123 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:37Z","lastTransitionTime":"2026-01-22T13:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
[identical status cycles at 13:44:37.150 through 13:44:37.769 omitted]
Jan 22 13:44:37 crc kubenswrapper[4769]: I0122 13:44:37.861140 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 08:15:17.687543272 +0000 UTC
[identical status cycle at 13:44:37.872 omitted]
Jan 22 13:44:37 crc kubenswrapper[4769]: I0122 13:44:37.882571 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 22 13:44:37 crc kubenswrapper[4769]: I0122 13:44:37.882616 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 22 13:44:37 crc kubenswrapper[4769]: E0122 13:44:37.882672 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 22 13:44:37 crc kubenswrapper[4769]: I0122 13:44:37.882714 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 22 13:44:37 crc kubenswrapper[4769]: E0122 13:44:37.882879 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 22 13:44:37 crc kubenswrapper[4769]: E0122 13:44:37.882985 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
[identical status cycle at 13:44:37.974 omitted]
[identical status cycles at 13:44:38.077 through 13:44:38.799 omitted]
Jan 22 13:44:38 crc kubenswrapper[4769]: I0122 13:44:38.861610 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 15:15:08.532431295 +0000 UTC
Jan 22 13:44:38 crc kubenswrapper[4769]: I0122 13:44:38.882938 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49"
Jan 22 13:44:38 crc kubenswrapper[4769]: E0122 13:44:38.883143 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba"
[identical status cycles at 13:44:38.902 and 13:44:39.006 omitted]
[identical status cycles at 13:44:39.109 through 13:44:39.835 omitted]
Jan 22 13:44:39 crc kubenswrapper[4769]: I0122 13:44:39.861766 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 00:59:49.614863995 +0000 UTC
Jan 22 13:44:39 crc kubenswrapper[4769]: I0122 13:44:39.882330 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 22 13:44:39 crc kubenswrapper[4769]: I0122 13:44:39.882428 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 22 13:44:39 crc kubenswrapper[4769]: I0122 13:44:39.882563 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 22 13:44:39 crc kubenswrapper[4769]: E0122 13:44:39.882756 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 22 13:44:39 crc kubenswrapper[4769]: E0122 13:44:39.882964 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 22 13:44:39 crc kubenswrapper[4769]: E0122 13:44:39.883156 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
[identical status cycle at 13:44:39.938 omitted]
[identical status cycles at 13:44:40.041 through 13:44:40.765 omitted]
Jan 22 13:44:40 crc kubenswrapper[4769]: I0122 13:44:40.862120 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 12:41:12.421339661 +0000 UTC
[identical status cycle at 13:44:40.870 omitted]
Jan 22 13:44:40 crc kubenswrapper[4769]: I0122 13:44:40.882758 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49"
Jan 22 13:44:40 crc kubenswrapper[4769]: E0122 13:44:40.883028 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba"
pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:44:40 crc kubenswrapper[4769]: I0122 13:44:40.906307 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:40Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:40 crc kubenswrapper[4769]: I0122 13:44:40.919731 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x582x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34fa095e-fc7f-431c-8421-1178e63721ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c630331d1e07a9bd283dfb95e9324ecc7666491cb2a4383ba82f45bbcaee3ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c8w6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x582x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:40Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:40 crc kubenswrapper[4769]: I0122 13:44:40.941176 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fclh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4186e93-df8a-49d3-9068-c8b8acd05baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4e835bfb6d47d3628a5f67cb226a00d51c3eebec57de5db55a54406bf1e6122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk8w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fclh4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:40Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:40 crc kubenswrapper[4769]: I0122 13:44:40.956133 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pwktf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29c69aef-2c74-4731-8334-85c8c755be74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05de7d7a90042aebcc3f9c3ecd82febecef6e209d3c12dfe22a55b0a2960afdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m892q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10390dacc9fe0452c4b8e2f3b43ffa16abdb260918a2cea271e546875c22cd84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m892q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pwktf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:40Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:40 crc kubenswrapper[4769]: I0122 13:44:40.968254 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cfh49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9764ff0b-ae92-470b-af85-7c8bb41642ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vshp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vshp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cfh49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:40Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:40 crc kubenswrapper[4769]: I0122 13:44:40.973538 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:40 crc kubenswrapper[4769]: I0122 13:44:40.973568 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:40 crc kubenswrapper[4769]: I0122 13:44:40.973578 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:40 crc kubenswrapper[4769]: I0122 13:44:40.973593 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:40 crc kubenswrapper[4769]: I0122 13:44:40.973604 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:40Z","lastTransitionTime":"2026-01-22T13:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:40 crc kubenswrapper[4769]: I0122 13:44:40.980779 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e17d6c01-6246-4f19-b9a9-e3931ac380fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cd7e89ca0bee05fa5b6d5a5ca1d303af1299572c4480fb92a515acaa792d6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb0506cd1a0b9519c03150969442ddf7bfe4621fed24943b71fed8eb2d9788f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2655c9a58f6e63f5a53485b0bf1a679818c12a7988705232c65930e5f421eb9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d250061efa0ea6e9a6e20599aef055162d62e1c901353b8eac8b3568dff86166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d250061efa0ea6e9a6e20599aef055162d62e1c901353b8eac8b3568dff86166\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:40Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:40 crc kubenswrapper[4769]: I0122 13:44:40.999728 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8acbc9bd-4838-4547-80a1-a1e16c37bd1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5acd2ae7d3fbf9440fb0095eefddc52c84728ec6d0e4d9821229cb6aa12573d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5c6216f99e720568a7dd7656244ade4b4fd44412e714b6135242a19f78761f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d261d3f5d679cb5cf8f94fe77bb969fd91ca614da19b149ce6eb02a380114d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://752fd19e1b4a5fd4026faf94d00dfd9322825c1
9436cccb582e074f4c203bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04ae80bb757e0b6ea9a3759ec49af25cff9106966608aa9750a0026e6053d15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:40Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.011847 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2dae2b-fadf-49f8-8ff4-4772ad128dca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45689b9787ddf9d1ee0769e761b7b24a3804e1a92e4bc522b5fc4ef4dc145ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83046a8dcab554309ccb822f7ea451bd90e9bbe0037f2b51061cb2943122ac47\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad0f1c98d4f80ff4fb3c0a0b65518ea6fdf45f4e428661eb8c34419d0774da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0a1825528a237f05c91fcea30668c1b12f4b58a604912844e672424907cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:41Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.024984 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340e014090eff13efd7d84a279f298e5d933733828964a3926f5ce780010c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:41Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.037354 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:41Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.053206 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6f2c33f3aa2c24eb46a672d0d23d034a7833eb85c8d4c7313b04fd659d7db54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f
8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:06Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d9wdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:41Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.069527 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c028db8-99b9-422d-ba46-e1a2db06ce3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a2c8b17afe59bd8ef3c5908ea0b3175ae6612f
24331a32bc0626daa47d5d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8a2c8b17afe59bd8ef3c5908ea0b3175ae6612f24331a32bc0626daa47d5d14\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T13:44:27Z\\\",\\\"message\\\":\\\"vent on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0122 13:44:26.875689 6349 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI0122 13:44:26.875703 6349 obj_retry.go:303] Retry object setup: *v1.Pod openshift-etcd/etcd-crc\\\\nI0122 13:44:26.875703 6349 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0122 13:44:26.875715 6349 obj_retry.go:365] Adding new object: *v1.Pod openshift-etcd/etcd-crc\\\\nI0122 13:44:26.875722 6349 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0122 13:44:26.875727 6349 ovn.go:134] Ensuring zone local for Pod openshift-etcd/etcd-crc in node crc\\\\nI0122 13:44:26.875737 6349 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nI0122 13:44:26.875745 6349 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nF0122 13:44:26.875769 6349 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handle\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jrg8z_openshift-ovn-kubernetes(9c028db8-99b9-422d-ba46-e1a2db06ce3c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:41Z is after 2025-08-24T17:21:41Z"
Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.077535 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.077730 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.077886 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.078003 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.078108 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:41Z","lastTransitionTime":"2026-01-22T13:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.081091 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5e43a9-5dd9-470e-a3e1-65be2c0003c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 13:43:58.922221 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 13:43:58.922428 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 13:43:58.923819 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3279922669/tls.crt::/tmp/serving-cert-3279922669/tls.key\\\\\\\"\\\\nI0122 13:43:59.114141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 13:43:59.115832 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 13:43:59.115850 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 13:43:59.115871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 13:43:59.115876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 13:43:59.120589 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 13:43:59.120619 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120629 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 13:43:59.120651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 13:43:59.120656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 13:43:59.120662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 13:43:59.120676 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 13:43:59.123476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:41Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.090865 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:41Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.101452 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b962200fa0a2d9c500125f56e96656fb0feb5d500a768d148cf9ca2a0569f970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f407182e277e55ccfc4bdc9fdd0832e0eab3cad2df3b32a66189e599dd303f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:41Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.112070 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b546a5839399c4434cd7427e59be44414fd81cd31a3b8736bbc23d9c03ab5fd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:41Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.126296 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0af8746-c9f0-48e6-8a60-02fed286b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be9c299672c1ced5d45263cb73ea0d7766a3cd47bbb16996c91c24206203ffe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9528976f6e0625739097546d794445c24881673bfd0df525a77bbbd61e67897d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hwhw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:41Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.139262 4769 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-bqn6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16fc232a-07ad-4611-8612-7b1c3f784c14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ec909e6b2784f7c1e2fca82827a0c42109ef1c08e0c2ae484574c8f2c6460f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bqn6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:41Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.180515 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.180606 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.180658 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.180693 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.180715 4769 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:41Z","lastTransitionTime":"2026-01-22T13:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.283541 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.283600 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.283617 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.283641 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.283659 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:41Z","lastTransitionTime":"2026-01-22T13:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.386086 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.386408 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.386427 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.386452 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.386470 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:41Z","lastTransitionTime":"2026-01-22T13:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.522623 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.522663 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.522675 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.522691 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.522703 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:41Z","lastTransitionTime":"2026-01-22T13:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.625516 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.625566 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.625578 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.625596 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.625607 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:41Z","lastTransitionTime":"2026-01-22T13:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.728197 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.728259 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.728282 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.728310 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.728334 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:41Z","lastTransitionTime":"2026-01-22T13:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.831226 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.831295 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.831318 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.831348 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.831402 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:41Z","lastTransitionTime":"2026-01-22T13:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.863095 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 18:02:08.861932752 +0000 UTC Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.882581 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.882642 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.882661 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:44:41 crc kubenswrapper[4769]: E0122 13:44:41.882968 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:44:41 crc kubenswrapper[4769]: E0122 13:44:41.883100 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:44:41 crc kubenswrapper[4769]: E0122 13:44:41.883261 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.884445 4769 scope.go:117] "RemoveContainer" containerID="c8a2c8b17afe59bd8ef3c5908ea0b3175ae6612f24331a32bc0626daa47d5d14" Jan 22 13:44:41 crc kubenswrapper[4769]: E0122 13:44:41.884752 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-jrg8z_openshift-ovn-kubernetes(9c028db8-99b9-422d-ba46-e1a2db06ce3c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.934779 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.934897 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.934921 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.934951 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.934972 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:41Z","lastTransitionTime":"2026-01-22T13:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.037293 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.037400 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.037512 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.037548 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.037571 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:42Z","lastTransitionTime":"2026-01-22T13:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.140038 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.140102 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.140117 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.140137 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.140147 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:42Z","lastTransitionTime":"2026-01-22T13:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.242246 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.242339 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.242373 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.242410 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.242433 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:42Z","lastTransitionTime":"2026-01-22T13:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.344699 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.344755 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.344771 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.344825 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.344854 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:42Z","lastTransitionTime":"2026-01-22T13:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.447311 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.447346 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.447356 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.447369 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.447378 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:42Z","lastTransitionTime":"2026-01-22T13:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.550472 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.550511 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.550528 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.550547 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.550561 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:42Z","lastTransitionTime":"2026-01-22T13:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.653113 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.653199 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.653223 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.653255 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.653282 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:42Z","lastTransitionTime":"2026-01-22T13:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.756401 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.756463 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.756481 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.756505 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.756523 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:42Z","lastTransitionTime":"2026-01-22T13:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.859002 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.859058 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.859069 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.859087 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.859100 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:42Z","lastTransitionTime":"2026-01-22T13:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.864236 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 04:06:05.309891746 +0000 UTC Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.882822 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:44:42 crc kubenswrapper[4769]: E0122 13:44:42.882982 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.961215 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.961251 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.961259 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.961273 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.961281 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:42Z","lastTransitionTime":"2026-01-22T13:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.063849 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.063930 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.063953 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.063983 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.064005 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:43Z","lastTransitionTime":"2026-01-22T13:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.166473 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.166557 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.166582 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.166614 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.166636 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:43Z","lastTransitionTime":"2026-01-22T13:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.268869 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.268933 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.268955 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.268989 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.269012 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:43Z","lastTransitionTime":"2026-01-22T13:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.370702 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.370759 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.370776 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.370832 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.370856 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:43Z","lastTransitionTime":"2026-01-22T13:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
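Editor's note: the entries that follow show why the node object never converges. Each attempt to PATCH the node status is rejected because the node.network-node-identity.openshift.io webhook presents an expired serving certificate. Below is a minimal, stdlib-only sketch of the validity-window comparison that Go's x509 verification performs, using the two timestamps quoted verbatim in the error entries that follow (assuming the 2025-08-24 value is the certificate's NotAfter):

```go
// Reproduce the time comparison behind "x509: certificate has expired":
// verification fails when the current time falls after NotAfter.
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse(time.RFC3339, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	notAfter := mustParse("2025-08-24T17:21:41Z") // cert expiry quoted in the error below
	now := mustParse("2026-01-22T13:44:43Z")      // kubelet's current time from the log

	if now.After(notAfter) {
		fmt.Printf("x509: certificate has expired: current time %s is after %s\n",
			now.Format(time.RFC3339), notAfter.Format(time.RFC3339))
	}
}
```

Until that certificate is rotated, every retry logged below fails identically, which is why the kubelet keeps re-recording the same NotReady events between attempts.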
Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.385107 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.385151 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.385170 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.385192 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.385207 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:43Z","lastTransitionTime":"2026-01-22T13:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:43 crc kubenswrapper[4769]: E0122 13:44:43.399742 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c179e315-653f-44a2-90da-146c8bca7b57\\\",\\\"systemUUID\\\":\\\"a3bb8776-1087-4679-a96f-5f1347bd430e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:43Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.404498 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.404574 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.404595 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.404622 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.404649 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:43Z","lastTransitionTime":"2026-01-22T13:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:43 crc kubenswrapper[4769]: E0122 13:44:43.425766 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c179e315-653f-44a2-90da-146c8bca7b57\\\",\\\"systemUUID\\\":\\\"a3bb8776-1087-4679-a96f-5f1347bd430e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:43Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.430330 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.430393 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.430407 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.430430 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.430445 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:43Z","lastTransitionTime":"2026-01-22T13:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:43 crc kubenswrapper[4769]: E0122 13:44:43.448695 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c179e315-653f-44a2-90da-146c8bca7b57\\\",\\\"systemUUID\\\":\\\"a3bb8776-1087-4679-a96f-5f1347bd430e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:43Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.455040 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.455407 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.455558 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.455698 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.455862 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:43Z","lastTransitionTime":"2026-01-22T13:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:43 crc kubenswrapper[4769]: E0122 13:44:43.470776 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c179e315-653f-44a2-90da-146c8bca7b57\\\",\\\"systemUUID\\\":\\\"a3bb8776-1087-4679-a96f-5f1347bd430e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:43Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.474276 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.474331 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.474345 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.474363 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.474379 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:43Z","lastTransitionTime":"2026-01-22T13:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:43 crc kubenswrapper[4769]: E0122 13:44:43.487577 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c179e315-653f-44a2-90da-146c8bca7b57\\\",\\\"systemUUID\\\":\\\"a3bb8776-1087-4679-a96f-5f1347bd430e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:43Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:43 crc kubenswrapper[4769]: E0122 13:44:43.487687 4769 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.489256 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.489282 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.489290 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.489303 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.489312 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:43Z","lastTransitionTime":"2026-01-22T13:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.591930 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.591979 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.591989 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.592004 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.592015 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:43Z","lastTransitionTime":"2026-01-22T13:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.694950 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.694987 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.694998 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.695012 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.695022 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:43Z","lastTransitionTime":"2026-01-22T13:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.797653 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.797715 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.797732 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.797759 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.797845 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:43Z","lastTransitionTime":"2026-01-22T13:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.864931 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 11:30:56.535976803 +0000 UTC Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.882238 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.882330 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:44:43 crc kubenswrapper[4769]: E0122 13:44:43.882384 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:44:43 crc kubenswrapper[4769]: E0122 13:44:43.882474 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.882682 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:44:43 crc kubenswrapper[4769]: E0122 13:44:43.882889 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.900088 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.900343 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.900434 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.900567 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.900655 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:43Z","lastTransitionTime":"2026-01-22T13:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.003750 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.004122 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.004268 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.004386 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.004500 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:44Z","lastTransitionTime":"2026-01-22T13:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.106586 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.106637 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.106649 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.106666 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.106678 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:44Z","lastTransitionTime":"2026-01-22T13:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.209286 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.209322 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.209334 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.209350 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.209361 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:44Z","lastTransitionTime":"2026-01-22T13:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.312090 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.312144 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.312157 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.312177 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.312189 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:44Z","lastTransitionTime":"2026-01-22T13:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.414642 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.414710 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.414723 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.414738 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.414747 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:44Z","lastTransitionTime":"2026-01-22T13:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.517519 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.517775 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.517861 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.517951 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.518036 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:44Z","lastTransitionTime":"2026-01-22T13:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.622356 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.622637 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.622725 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.622810 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.622881 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:44Z","lastTransitionTime":"2026-01-22T13:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.725527 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.725587 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.725604 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.725627 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.725646 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:44Z","lastTransitionTime":"2026-01-22T13:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.828003 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.828232 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.828294 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.828363 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.828424 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:44Z","lastTransitionTime":"2026-01-22T13:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.865905 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 12:42:37.072493181 +0000 UTC Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.883953 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:44:44 crc kubenswrapper[4769]: E0122 13:44:44.884132 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.931185 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.931520 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.931674 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.931772 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.931873 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:44Z","lastTransitionTime":"2026-01-22T13:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.034352 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.034389 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.034417 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.034431 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.034442 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:45Z","lastTransitionTime":"2026-01-22T13:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.136155 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.136200 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.136212 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.136236 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.136247 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:45Z","lastTransitionTime":"2026-01-22T13:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.238744 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.239002 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.239067 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.239148 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.239211 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:45Z","lastTransitionTime":"2026-01-22T13:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.344467 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.344520 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.344553 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.344567 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.344576 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:45Z","lastTransitionTime":"2026-01-22T13:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.447683 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.447733 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.447745 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.447764 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.447777 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:45Z","lastTransitionTime":"2026-01-22T13:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.556293 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.556325 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.556334 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.556349 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.556360 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:45Z","lastTransitionTime":"2026-01-22T13:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.658810 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.658857 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.658865 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.658880 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.658891 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:45Z","lastTransitionTime":"2026-01-22T13:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.761333 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.761381 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.761393 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.761410 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.761425 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:45Z","lastTransitionTime":"2026-01-22T13:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.863902 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.863941 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.863951 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.863967 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.864013 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:45Z","lastTransitionTime":"2026-01-22T13:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.866195 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 17:06:00.218959182 +0000 UTC Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.882628 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.882651 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.882631 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:44:45 crc kubenswrapper[4769]: E0122 13:44:45.882759 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:44:45 crc kubenswrapper[4769]: E0122 13:44:45.882848 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:44:45 crc kubenswrapper[4769]: E0122 13:44:45.882902 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.966285 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.966328 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.966340 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.966357 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.966369 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:45Z","lastTransitionTime":"2026-01-22T13:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.068495 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.068545 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.068558 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.068577 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.068591 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:46Z","lastTransitionTime":"2026-01-22T13:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.171097 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.171165 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.171177 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.171194 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.171206 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:46Z","lastTransitionTime":"2026-01-22T13:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.273886 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.273932 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.273941 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.273956 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.273964 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:46Z","lastTransitionTime":"2026-01-22T13:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.375849 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.375884 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.375891 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.375904 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.375913 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:46Z","lastTransitionTime":"2026-01-22T13:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.478236 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.478282 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.478295 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.478312 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.478324 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:46Z","lastTransitionTime":"2026-01-22T13:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.579866 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9764ff0b-ae92-470b-af85-7c8bb41642ba-metrics-certs\") pod \"network-metrics-daemon-cfh49\" (UID: \"9764ff0b-ae92-470b-af85-7c8bb41642ba\") " pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:44:46 crc kubenswrapper[4769]: E0122 13:44:46.580066 4769 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 13:44:46 crc kubenswrapper[4769]: E0122 13:44:46.580144 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9764ff0b-ae92-470b-af85-7c8bb41642ba-metrics-certs podName:9764ff0b-ae92-470b-af85-7c8bb41642ba nodeName:}" failed. No retries permitted until 2026-01-22 13:45:18.580123383 +0000 UTC m=+97.991233312 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9764ff0b-ae92-470b-af85-7c8bb41642ba-metrics-certs") pod "network-metrics-daemon-cfh49" (UID: "9764ff0b-ae92-470b-af85-7c8bb41642ba") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.581314 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.581361 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.581375 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.581392 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.581405 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:46Z","lastTransitionTime":"2026-01-22T13:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.684347 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.684412 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.684429 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.684455 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.684475 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:46Z","lastTransitionTime":"2026-01-22T13:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.786821 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.786849 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.786857 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.786871 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.786880 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:46Z","lastTransitionTime":"2026-01-22T13:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.866282 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 05:28:11.973655071 +0000 UTC Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.882874 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:44:46 crc kubenswrapper[4769]: E0122 13:44:46.883034 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.888218 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.888272 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.888281 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.888293 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.888302 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:46Z","lastTransitionTime":"2026-01-22T13:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
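[editor's note] The certificate_manager entry above reports a kubelet-serving certificate expiring 2026-02-24 with a rotation deadline of 2025-11-16 — already in the past at log time, so the kubelet will attempt rotation. Client-side certificate managers of this kind typically pick the deadline at a jittered point 70–90% of the way through the validity window so a fleet of nodes does not rotate simultaneously; the sketch below assumes that scheme and a one-year certificate lifetime, since the log does not show NotBefore.

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a random point in the 70-90% span of the
// certificate's validity window. The 70-90% band is an assumption
// modeled on client-go-style certificate managers.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	// NotAfter is taken from the log; NotBefore assumes a one-year cert.
	notBefore := time.Date(2025, 2, 24, 5, 53, 3, 0, time.UTC)
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)
	deadline := rotationDeadline(notBefore, notAfter)
	fmt.Printf("certificate expires %s; rotation deadline %s\n", notAfter, deadline)
	if time.Now().After(deadline) {
		fmt.Println("rotation deadline has passed: request a new certificate")
	}
}
```

Under the one-year assumption the 70–90% band runs roughly 2025-11-06 through 2026-01-18, which brackets the 2025-11-16 deadline recorded above.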
Has your network provider started?"} Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.990184 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.990224 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.990234 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.990247 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.990256 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:46Z","lastTransitionTime":"2026-01-22T13:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.092538 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.092576 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.092587 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.092603 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.092614 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:47Z","lastTransitionTime":"2026-01-22T13:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.194648 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.194685 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.194694 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.194711 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.194726 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:47Z","lastTransitionTime":"2026-01-22T13:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
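[editor's note] Every "Node became not ready" entry above carries the same condition payload: while no CNI configuration exists in /etc/kubernetes/cni/net.d/, the kubelet publishes Ready=False with reason KubeletNotReady on each heartbeat. For reference, this sketch reproduces that exact JSON; it uses a local struct mirroring the core/v1 NodeCondition fields so it runs without any Kubernetes dependencies.

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// nodeCondition mirrors the fields of core/v1 NodeCondition that appear
// in the "Node became not ready" entries above (local copy so the
// example is dependency-free).
type nodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	now := time.Date(2026, 1, 22, 13, 44, 46, 0, time.UTC).Format(time.RFC3339)
	cond := nodeCondition{
		Type:               "Ready",
		Status:             "False",
		LastHeartbeatTime:  now,
		LastTransitionTime: now,
		Reason:             "KubeletNotReady",
		Message: "container runtime network not ready: NetworkReady=false " +
			"reason:NetworkPluginNotReady message:Network plugin returns error: " +
			"no CNI configuration file in /etc/kubernetes/cni/net.d/. " +
			"Has your network provider started?",
	}
	out, _ := json.Marshal(cond)
	fmt.Println(string(out)) // matches the condition={...} payload in the log
}
```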
Has your network provider started?"} Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.297199 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.297269 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.297615 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.297696 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.297945 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:47Z","lastTransitionTime":"2026-01-22T13:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.352301 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fclh4_d4186e93-df8a-49d3-9068-c8b8acd05baa/kube-multus/0.log" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.352489 4769 generic.go:334] "Generic (PLEG): container finished" podID="d4186e93-df8a-49d3-9068-c8b8acd05baa" containerID="f4e835bfb6d47d3628a5f67cb226a00d51c3eebec57de5db55a54406bf1e6122" exitCode=1 Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.352540 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fclh4" event={"ID":"d4186e93-df8a-49d3-9068-c8b8acd05baa","Type":"ContainerDied","Data":"f4e835bfb6d47d3628a5f67cb226a00d51c3eebec57de5db55a54406bf1e6122"} Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.353311 4769 scope.go:117] "RemoveContainer" containerID="f4e835bfb6d47d3628a5f67cb226a00d51c3eebec57de5db55a54406bf1e6122" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.370382 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:47Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.381209 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b962200fa0a2d9c500125f56e96656fb0feb5d500a768d148cf9ca2a0569f970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f407182e277e55ccfc4bdc9fdd0832e0eab3cad2df3b32a66189e599dd303f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:47Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.394646 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b546a5839399c4434cd7427e59be44414fd81cd31a3b8736bbc23d9c03ab5fd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:47Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.400230 4769 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.400262 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.400274 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.400291 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.400301 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:47Z","lastTransitionTime":"2026-01-22T13:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.406007 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5e43a9-5dd9-470e-a3e1-65be2c0003c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver
-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 13:43:58.922221 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 13:43:58.922428 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 13:43:58.923819 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3279922669/tls.crt::/tmp/serving-cert-3279922669/tls.key\\\\\\\"\\\\nI0122 13:43:59.114141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 13:43:59.115832 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 13:43:59.115850 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 13:43:59.115871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 13:43:59.115876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 13:43:59.120589 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 13:43:59.120619 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120629 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 13:43:59.120651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 13:43:59.120656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' 
detected.\\\\nW0122 13:43:59.120662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 13:43:59.120676 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 13:43:59.123476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:47Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.417096 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" err="failed to patch status 
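[editor's note] Every status-patch failure in this stretch of the log ends the same way: the node-identity webhook's serving certificate expired on 2025-08-24, so each TLS handshake with 127.0.0.1:9743 fails verification. The error string comes straight from Go's crypto/x509. The sketch below creates a self-signed certificate with the same NotAfter (NotBefore is an assumption) and verifies it at the log's "current time" to reproduce the exact message; error handling is elided for brevity.

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"time"
)

func main() {
	// Self-signed certificate whose validity ended in the past, modeled on
	// the webhook cert above (expired 2025-08-24T17:21:41Z; issue date assumed).
	key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "network-node-identity.openshift.io"},
		NotBefore:             time.Date(2024, 8, 24, 17, 21, 41, 0, time.UTC),
		NotAfter:              time.Date(2025, 8, 24, 17, 21, 41, 0, time.UTC),
		IsCA:                  true,
		BasicConstraintsValid: true,
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	cert, _ := x509.ParseCertificate(der)

	roots := x509.NewCertPool()
	roots.AddCert(cert)

	// Verifying at the log's timestamp fails exactly as kubelet reports:
	// "x509: certificate has expired or is not yet valid: current time
	//  2026-01-22T13:44:47Z is after 2025-08-24T17:21:41Z"
	_, err := cert.Verify(x509.VerifyOptions{
		Roots:       roots,
		CurrentTime: time.Date(2026, 1, 22, 13, 44, 47, 0, time.UTC),
	})
	fmt.Println("verify error:", err)
}
```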
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0af8746-c9f0-48e6-8a60-02fed286b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be9c299672c1ced5d45263cb73ea0d7766a3cd47bbb16996c91c24206203ffe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9528976f6e0625739097546d794445c24881673bfd0df525a77bbbd61e67897d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hwhw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:47Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.426257 4769 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-bqn6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16fc232a-07ad-4611-8612-7b1c3f784c14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ec909e6b2784f7c1e2fca82827a0c42109ef1c08e0c2ae484574c8f2c6460f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bqn6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:47Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.438969 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fclh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4186e93-df8a-49d3-9068-c8b8acd05baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4e835bfb6d47d3628a5f67cb226a00d51c3eebec57de5db55a54406bf1e6122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4e835bfb6d47d3628a5f67cb226a00d51c3eebec57de5db55a54406bf1e6122\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T13:44:46Z\\\",\\\"message\\\":\\\"2026-01-22T13:44:01+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_3c45a748-1fff-4c37-bf93-cbbef666b3f8\\\\n2026-01-22T13:44:01+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_3c45a748-1fff-4c37-bf93-cbbef666b3f8 to /host/opt/cni/bin/\\\\n2026-01-22T13:44:01Z [verbose] multus-daemon started\\\\n2026-01-22T13:44:01Z [verbose] Readiness Indicator file check\\\\n2026-01-22T13:44:46Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk8w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fclh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:47Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.448451 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pwktf" err="failed to patch status 
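[editor's note] The kube-multus termination message above shows why that container exited 1: it polled for the readiness-indicator file 10-ovn-kubernetes.conf until the poll timed out ("timed out waiting for the condition" is the stock error text of apimachinery's wait.PollImmediate, which the log's "pollimmediate error" refers to). Below is a minimal dependency-free sketch of that wait loop; the interval and timeout values are illustrative, not multus's configured ones.

```go
package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// waitForFile polls for path until it exists or the timeout elapses — the
// same shape as the readiness-indicator check that times out above.
func waitForFile(path string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for the condition")
		}
		time.Sleep(interval)
	}
}

func main() {
	err := waitForFile("/host/run/multus/cni/net.d/10-ovn-kubernetes.conf",
		time.Second, 5*time.Second)
	if err != nil {
		fmt.Println("readiness indicator check failed:", err)
		os.Exit(1) // a non-zero exit, which kubelet records as exitCode 1 above
	}
	fmt.Println("default network is ready")
}
```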
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29c69aef-2c74-4731-8334-85c8c755be74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05de7d7a90042aebcc3f9c3ecd82febecef6e209d3c12dfe22a55b0a2960afdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m892q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10390dacc9fe0452c4b8e2f3b43ffa16abdb260918a2cea271e546875c22cd84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m892q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pwktf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:47Z is after 2025-08-24T17:21:41Z" Jan 22 
13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.458966 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cfh49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9764ff0b-ae92-470b-af85-7c8bb41642ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vshp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vshp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cfh49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:47Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.469373 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:47Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.478131 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x582x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34fa095e-fc7f-431c-8421-1178e63721ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c630331d1e07a9bd283dfb95e9324ecc7666491cb2a4383ba82f45bbcaee3ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c8w6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x582x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:47Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.489037 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2dae2b-fadf-49f8-8ff4-4772ad128dca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45689b9787ddf9d1ee0769e761b7b24a3804e1a92e4bc522b5fc4ef4dc145ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83046a8dcab554309ccb822f7ea451bd90e9bbe0037f2b51061cb2943122ac47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad0f1c98d4f80ff4fb3c0a0b65518ea6fdf45f4e428661eb8c34419d0774da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0a1825528a237f05c91fcea30668c1b12f4b58a604912844e672424907cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:47Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.502703 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340e014090eff13efd7d84a279f298e5d933733828964a3926f5ce780010c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:47Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.502973 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.503035 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.503047 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.503064 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.503076 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:47Z","lastTransitionTime":"2026-01-22T13:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.517807 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:47Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.534451 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6f2c33f3aa2c24eb46a672d0d23d034a7833eb85c8d4c7313b04fd659d7db54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d9wdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:47Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.552015 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c028db8-99b9-422d-ba46-e1a2db06ce3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a2c8b17afe59bd8ef3c5908ea0b3175ae6612f
24331a32bc0626daa47d5d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8a2c8b17afe59bd8ef3c5908ea0b3175ae6612f24331a32bc0626daa47d5d14\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T13:44:27Z\\\",\\\"message\\\":\\\"vent on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0122 13:44:26.875689 6349 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI0122 13:44:26.875703 6349 obj_retry.go:303] Retry object setup: *v1.Pod openshift-etcd/etcd-crc\\\\nI0122 13:44:26.875703 6349 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0122 13:44:26.875715 6349 obj_retry.go:365] Adding new object: *v1.Pod openshift-etcd/etcd-crc\\\\nI0122 13:44:26.875722 6349 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0122 13:44:26.875727 6349 ovn.go:134] Ensuring zone local for Pod openshift-etcd/etcd-crc in node crc\\\\nI0122 13:44:26.875737 6349 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nI0122 13:44:26.875745 6349 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nF0122 13:44:26.875769 6349 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handle\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jrg8z_openshift-ovn-kubernetes(9c028db8-99b9-422d-ba46-e1a2db06ce3c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:47Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.563345 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e17d6c01-6246-4f19-b9a9-e3931ac380fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cd7e89ca0bee05fa5b6d5a5ca1d303af1299572c4480fb92a515acaa792d6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb0506cd1a0b9519c03150969442ddf7bfe4621fed24943b71fed8eb2d9788f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c
97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2655c9a58f6e63f5a53485b0bf1a679818c12a7988705232c65930e5f421eb9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d250061efa0ea6e9a6e20599aef055162d62e1c901353b8eac8b3568dff86166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d250061efa0ea6e9a6e20599aef055162d62e1c901353b8eac8b3568dff86166\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:47Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.590273 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8acbc9bd-4838-4547-80a1-a1e16c37bd1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5acd2ae7d3fbf9440fb0095eefddc52c84728ec6d0e4d9821229cb6aa12573d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5c6216f99e720568a7dd7656244ade4b4fd44412e714b6135242a19f78761f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d261d3f5d679cb5cf8f94fe77bb969fd91ca614da19b149ce6eb02a380114d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://752fd19e1b4a5fd4026faf94d00dfd9322825c1
9436cccb582e074f4c203bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04ae80bb757e0b6ea9a3759ec49af25cff9106966608aa9750a0026e6053d15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:47Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.606145 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.606183 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.606192 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.606207 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.606219 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:47Z","lastTransitionTime":"2026-01-22T13:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.708051 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.708099 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.708108 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.708122 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.708132 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:47Z","lastTransitionTime":"2026-01-22T13:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.812306 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.812346 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.812358 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.812373 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.812383 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:47Z","lastTransitionTime":"2026-01-22T13:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.867428 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 04:54:46.58243414 +0000 UTC Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.883056 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.883086 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:44:47 crc kubenswrapper[4769]: E0122 13:44:47.883203 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.883222 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:44:47 crc kubenswrapper[4769]: E0122 13:44:47.883333 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:44:47 crc kubenswrapper[4769]: E0122 13:44:47.883393 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.914648 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.914675 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.914684 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.914696 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.914705 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:47Z","lastTransitionTime":"2026-01-22T13:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.017385 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.017413 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.017420 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.017477 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.017499 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:48Z","lastTransitionTime":"2026-01-22T13:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.119614 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.119640 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.119648 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.119660 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.119669 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:48Z","lastTransitionTime":"2026-01-22T13:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.222722 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.222781 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.222806 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.222822 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.222831 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:48Z","lastTransitionTime":"2026-01-22T13:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.324715 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.324778 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.324816 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.324833 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.324843 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:48Z","lastTransitionTime":"2026-01-22T13:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.357566 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fclh4_d4186e93-df8a-49d3-9068-c8b8acd05baa/kube-multus/0.log" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.357621 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fclh4" event={"ID":"d4186e93-df8a-49d3-9068-c8b8acd05baa","Type":"ContainerStarted","Data":"ffa3ce92a87f448f60b39283929d77139230e6bb0052cdeb6303e0f6b13997d8"} Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.367820 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cfh49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9764ff0b-ae92-470b-af85-7c8bb41642ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vshp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vshp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cfh49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:48Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.379333 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:48Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.389226 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x582x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34fa095e-fc7f-431c-8421-1178e63721ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c630331d1e07a9bd283dfb95e9324ecc7666491cb2a4383ba82f45bbcaee3ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c8w6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x582x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-22T13:44:48Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.402949 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fclh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4186e93-df8a-49d3-9068-c8b8acd05baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffa3ce92a87f448f60b39283929d77139230e6bb0052cdeb6303e0f6b13997d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4e835bfb6d47d3628a5f67cb226a00d51c3eebec57de5db55a54406bf1e6122\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T13:44:46Z\\\",\\\"message\\\":\\\"2026-01-22T13:44:01+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_3c45a748-1fff-4c37-bf93-cbbef666b3f8\\\\n2026-01-22T13:44:01+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_3c45a748-1fff-4c37-bf93-cbbef666b3f8 to /host/opt/cni/bin/\\\\n2026-01-22T13:44:01Z [verbose] multus-daemon started\\\\n2026-01-22T13:44:01Z [verbose] Readiness Indicator file check\\\\n2026-01-22T13:44:46Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk8w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fclh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:48Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.414199 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pwktf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29c69aef-2c74-4731-8334-85c8c755be74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05de7d7a90042aebcc3f9c3ecd82febecef6e209d3c12dfe22a55b0a2960afdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m892q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10390dacc9fe0452c4b8e2f3b43ffa16abdb260918a2cea271e546875c22cd84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m892q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pwktf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:48Z is after 2025-08-24T17:21:41Z" Jan 22 
13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.426869 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:48Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.427998 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.428035 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.428045 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.428060 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.428072 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:48Z","lastTransitionTime":"2026-01-22T13:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.446140 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6f2c33f3aa2c24eb46a672d0d23d034a7833eb85c8d4c7313b04fd659d7db54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01
-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d9wdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:48Z is after 2025-08-24T17:21:41Z" Jan 22 
13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.470601 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c028db8-99b9-422d-ba46-e1a2db06ce3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a63e4e37243c1da0493d2f2a1c4683
48a4aee47032e54fe98f0047bbddc3b571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a2c8b17afe59bd8ef3c5908ea0b3175ae6612f24331a32bc0626daa47d5d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8a2c8b17afe59bd8ef3c5908ea0b3175ae6612f24331a32bc0626daa47d5d14\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T13:44:27Z\\\",\\\"message\\\":\\\"vent on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0122 13:44:26.875689 6349 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI0122 13:44:26.875703 6349 obj_retry.go:303] Retry object setup: *v1.Pod openshift-etcd/etcd-crc\\\\nI0122 13:44:26.875703 6349 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0122 13:44:26.875715 6349 obj_retry.go:365] Adding new object: *v1.Pod openshift-etcd/etcd-crc\\\\nI0122 13:44:26.875722 6349 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0122 13:44:26.875727 6349 ovn.go:134] Ensuring zone local for Pod openshift-etcd/etcd-crc in node crc\\\\nI0122 13:44:26.875737 6349 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nI0122 13:44:26.875745 6349 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nF0122 13:44:26.875769 6349 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handle\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jrg8z_openshift-ovn-kubernetes(9c028db8-99b9-422d-ba46-e1a2db06ce3c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:48Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.480839 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e17d6c01-6246-4f19-b9a9-e3931ac380fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cd7e89ca0bee05fa5b6d5a5ca1d303af1299572c4480fb92a515acaa792d6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb0506cd1a0b9519c03150969442ddf7bfe4621fed24943b71fed8eb2d9788f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c
97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2655c9a58f6e63f5a53485b0bf1a679818c12a7988705232c65930e5f421eb9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d250061efa0ea6e9a6e20599aef055162d62e1c901353b8eac8b3568dff86166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d250061efa0ea6e9a6e20599aef055162d62e1c901353b8eac8b3568dff86166\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:48Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.499624 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8acbc9bd-4838-4547-80a1-a1e16c37bd1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5acd2ae7d3fbf9440fb0095eefddc52c84728ec6d0e4d9821229cb6aa12573d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5c6216f99e720568a7dd7656244ade4b4fd44412e714b6135242a19f78761f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d261d3f5d679cb5cf8f94fe77bb969fd91ca614da19b149ce6eb02a380114d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://752fd19e1b4a5fd4026faf94d00dfd9322825c1
9436cccb582e074f4c203bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04ae80bb757e0b6ea9a3759ec49af25cff9106966608aa9750a0026e6053d15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:48Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.512175 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2dae2b-fadf-49f8-8ff4-4772ad128dca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45689b9787ddf9d1ee0769e761b7b24a3804e1a92e4bc522b5fc4ef4dc145ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83046a8dcab554309ccb822f7ea451bd90e9bbe0037f2b51061cb2943122ac47\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad0f1c98d4f80ff4fb3c0a0b65518ea6fdf45f4e428661eb8c34419d0774da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0a1825528a237f05c91fcea30668c1b12f4b58a604912844e672424907cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:48Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.524358 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340e014090eff13efd7d84a279f298e5d933733828964a3926f5ce780010c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:48Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.529555 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.529598 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.529607 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.529623 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.529632 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:48Z","lastTransitionTime":"2026-01-22T13:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.535208 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b546a5839399c4434cd7427e59be44414fd81cd31a3b8736bbc23d9c03ab5fd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:48Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.548603 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5e43a9-5dd9-470e-a3e1-65be2c0003c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 13:43:58.922221 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 13:43:58.922428 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 13:43:58.923819 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3279922669/tls.crt::/tmp/serving-cert-3279922669/tls.key\\\\\\\"\\\\nI0122 13:43:59.114141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 13:43:59.115832 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 13:43:59.115850 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 13:43:59.115871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 13:43:59.115876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 13:43:59.120589 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 13:43:59.120619 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120629 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 13:43:59.120651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 13:43:59.120656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 13:43:59.120662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 13:43:59.120676 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 13:43:59.123476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:48Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.560303 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:48Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.572427 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b962200fa0a2d9c500125f56e96656fb0feb5d500a768d148cf9ca2a0569f970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f407182e277e55ccfc4bdc9fdd0832e0eab3cad2df3b32a66189e599dd303f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:48Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.582279 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0af8746-c9f0-48e6-8a60-02fed286b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be9c299672c1ced5d45263cb73ea0d7766a3cd47bbb16996c91c24206203ffe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9528976f6e0625739097546d794445c24881673bfd0df525a77bbbd61e67897d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hwhw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:48Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.593210 4769 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-bqn6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16fc232a-07ad-4611-8612-7b1c3f784c14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ec909e6b2784f7c1e2fca82827a0c42109ef1c08e0c2ae484574c8f2c6460f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bqn6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:48Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.632166 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.632211 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.632223 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.632240 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.632251 4769 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:48Z","lastTransitionTime":"2026-01-22T13:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.735596 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.735635 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.735643 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.735658 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.735666 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:48Z","lastTransitionTime":"2026-01-22T13:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.837614 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.837658 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.837668 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.837687 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.837699 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:48Z","lastTransitionTime":"2026-01-22T13:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.869173 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 02:43:22.507993165 +0000 UTC Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.882819 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:44:48 crc kubenswrapper[4769]: E0122 13:44:48.883063 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.939532 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.939564 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.939571 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.939584 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.939593 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:48Z","lastTransitionTime":"2026-01-22T13:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.041475 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.041514 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.041524 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.041539 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.041550 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:49Z","lastTransitionTime":"2026-01-22T13:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.143856 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.143940 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.143953 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.143969 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.143979 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:49Z","lastTransitionTime":"2026-01-22T13:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.246869 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.246930 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.246946 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.246965 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.246978 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:49Z","lastTransitionTime":"2026-01-22T13:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.349374 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.349416 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.349435 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.349452 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.349463 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:49Z","lastTransitionTime":"2026-01-22T13:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.452076 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.452197 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.452219 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.452242 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.452259 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:49Z","lastTransitionTime":"2026-01-22T13:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.554398 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.554447 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.554459 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.554479 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.554492 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:49Z","lastTransitionTime":"2026-01-22T13:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.656711 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.656755 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.656766 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.656783 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.656819 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:49Z","lastTransitionTime":"2026-01-22T13:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.759080 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.759126 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.759140 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.759157 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.759169 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:49Z","lastTransitionTime":"2026-01-22T13:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.861105 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.861152 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.861162 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.861176 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.861186 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:49Z","lastTransitionTime":"2026-01-22T13:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.869712 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 21:04:39.535223809 +0000 UTC Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.883142 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.883164 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:44:49 crc kubenswrapper[4769]: E0122 13:44:49.883297 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.883343 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:44:49 crc kubenswrapper[4769]: E0122 13:44:49.883468 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:44:49 crc kubenswrapper[4769]: E0122 13:44:49.883520 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.963264 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.963318 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.963331 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.963352 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.963367 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:49Z","lastTransitionTime":"2026-01-22T13:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.065398 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.065462 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.065479 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.065503 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.065522 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:50Z","lastTransitionTime":"2026-01-22T13:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.167695 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.167744 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.167774 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.167815 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.167826 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:50Z","lastTransitionTime":"2026-01-22T13:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.270695 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.270760 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.270778 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.270829 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.270873 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:50Z","lastTransitionTime":"2026-01-22T13:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.372207 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.372259 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.372268 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.372281 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.372290 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:50Z","lastTransitionTime":"2026-01-22T13:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.474942 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.474987 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.474998 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.475017 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.475028 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:50Z","lastTransitionTime":"2026-01-22T13:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.577981 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.578059 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.578075 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.578112 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.578125 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:50Z","lastTransitionTime":"2026-01-22T13:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.680476 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.680533 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.680556 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.680577 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.680592 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:50Z","lastTransitionTime":"2026-01-22T13:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.784039 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.784138 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.784157 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.784181 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.784202 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:50Z","lastTransitionTime":"2026-01-22T13:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.870581 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 07:23:07.000481195 +0000 UTC Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.884119 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:44:50 crc kubenswrapper[4769]: E0122 13:44:50.884251 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.886704 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.886726 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.886733 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.886744 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.886753 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:50Z","lastTransitionTime":"2026-01-22T13:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.901890 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8acbc9bd-4838-4547-80a1-a1e16c37bd1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5acd2ae7d3fbf9440fb0095eefddc52c84728ec6d0e4d9821229cb6aa12573d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5c6216f99e720568a7dd7656244ade4b4fd44412e714b6135242a19f78761f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"ima
geID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d261d3f5d679cb5cf8f94fe77bb969fd91ca614da19b149ce6eb02a380114d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://752fd19e1b4a5fd4026faf94d00dfd9322825c19436cccb582e074f4c203bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04ae80bb757e0b6ea9a3759ec49af25cff9106966608aa9750a0026e6053d15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:50Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.913494 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2dae2b-fadf-49f8-8ff4-4772ad128dca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45689b9787ddf9d1ee0769e761b7b24a3804e1a92e4bc522b5fc4ef4dc145ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83046a8dcab554309ccb822f7ea451bd90e9bbe0037f2b51061cb2943122ac47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad0f1c98d4f80ff4fb3c0a0b65518ea6fdf45f4e428661eb8c34419d0774da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0a1825528a237f05c91fcea30668c1b12f4b58a604912844e672424907cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:50Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.929676 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340e014090eff13efd7d84a279f298e5d933733828964a3926f5ce780010c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:50Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.942444 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:50Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.957720 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6f2c33f3aa2c24eb46a672d0d23d034a7833eb85c8d4c7313b04fd659d7db54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d9wdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:50Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.976118 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c028db8-99b9-422d-ba46-e1a2db06ce3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a2c8b17afe59bd8ef3c5908ea0b3175ae6612f24331a32bc0626daa47d5d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8a2c8b17afe59bd8ef3c5908ea0b3175ae6612f24331a32bc0626daa47d5d14\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T13:44:27Z\\\",\\\"message\\\":\\\"vent on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0122 13:44:26.875689 6349 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI0122 13:44:26.875703 6349 obj_retry.go:303] Retry object setup: *v1.Pod openshift-etcd/etcd-crc\\\\nI0122 13:44:26.875703 6349 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0122 13:44:26.875715 6349 obj_retry.go:365] Adding new object: *v1.Pod openshift-etcd/etcd-crc\\\\nI0122 13:44:26.875722 6349 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0122 13:44:26.875727 6349 ovn.go:134] Ensuring zone local for Pod openshift-etcd/etcd-crc in node crc\\\\nI0122 13:44:26.875737 6349 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nI0122 13:44:26.875745 6349 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nF0122 13:44:26.875769 6349 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handle\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jrg8z_openshift-ovn-kubernetes(9c028db8-99b9-422d-ba46-e1a2db06ce3c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:50Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.986842 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e17d6c01-6246-4f19-b9a9-e3931ac380fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cd7e89ca0bee05fa5b6d5a5ca1d303af1299572c4480fb92a515acaa792d6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb0506cd1a0b9519c03150969442ddf7bfe4621fed24943b71fed8eb2d9788f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c
97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2655c9a58f6e63f5a53485b0bf1a679818c12a7988705232c65930e5f421eb9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d250061efa0ea6e9a6e20599aef055162d62e1c901353b8eac8b3568dff86166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d250061efa0ea6e9a6e20599aef055162d62e1c901353b8eac8b3568dff86166\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:50Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.988514 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.988550 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.988560 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.988576 4769 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.988585 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:50Z","lastTransitionTime":"2026-01-22T13:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.998610 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5e43a9-5dd9-470e-a3e1-65be2c0003c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e635
5e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 13:43:58.922221 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 13:43:58.922428 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 13:43:58.923819 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3279922669/tls.crt::/tmp/serving-cert-3279922669/tls.key\\\\\\\"\\\\nI0122 13:43:59.114141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 13:43:59.115832 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 13:43:59.115850 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 13:43:59.115871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 13:43:59.115876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 13:43:59.120589 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 13:43:59.120619 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120629 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 13:43:59.120651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 13:43:59.120656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 13:43:59.120662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 13:43:59.120676 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 13:43:59.123476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:50Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.012450 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:51Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.025142 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b962200fa0a2d9c500125f56e96656fb0feb5d500a768d148cf9ca2a0569f970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f407182e277e55ccfc4bdc9fdd0832e0eab3cad2df3b32a66189e599dd303f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:51Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.035456 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b546a5839399c4434cd7427e59be44414fd81cd31a3b8736bbc23d9c03ab5fd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:51Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.044139 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bqn6j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16fc232a-07ad-4611-8612-7b1c3f784c14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ec909e6b2784f7c1e2fca82827a0c42109ef1c08e0c2ae484574c8f2c6460f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bqn6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:51Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.054244 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0af8746-c9f0-48e6-8a60-02fed286b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be9c299672c1ced5d45263cb73ea0d7766a3cd47bbb16996c91c24206203ffe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9528976f6e0625739097546d794445c24881673bfd0df525a77bbbd61e67897d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hwhw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:51Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.063327 4769 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-x582x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34fa095e-fc7f-431c-8421-1178e63721ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c630331d1e07a9bd283dfb95e9324ecc7666491cb2a4383ba82f45bbcaee3ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c8w6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x582x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:51Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.076080 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fclh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4186e93-df8a-49d3-9068-c8b8acd05baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffa3ce92a87f448f60b39283929d77139230e6bb0052cdeb6303e0f6b13997d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4e835bfb6d47d3628a5f67cb226a00d51c3eebec57de5db55a54406bf1e6122\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T13:44:46Z\\\",\\\"message\\\":\\\"2026-01-22T13:44:01+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_3c45a748-1fff-4c37-bf93-cbbef666b3f8\\\\n2026-01-22T13:44:01+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_3c45a748-1fff-4c37-bf93-cbbef666b3f8 to /host/opt/cni/bin/\\\\n2026-01-22T13:44:01Z [verbose] multus-daemon started\\\\n2026-01-22T13:44:01Z [verbose] Readiness Indicator file check\\\\n2026-01-22T13:44:46Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk8w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fclh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:51Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.089213 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pwktf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29c69aef-2c74-4731-8334-85c8c755be74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05de7d7a90042aebcc3f9c3ecd82febecef6e209d3c12dfe22a55b0a2960afdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m892q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10390dacc9fe0452c4b8e2f3b43ffa16abdb260918a2cea271e546875c22cd84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m892q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pwktf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:51Z is after 2025-08-24T17:21:41Z" Jan 22 
13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.090250 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.090275 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.090284 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.090299 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.090310 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:51Z","lastTransitionTime":"2026-01-22T13:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.101086 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cfh49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9764ff0b-ae92-470b-af85-7c8bb41642ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vshp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vshp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cfh49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:51Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.111409 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:51Z is after 2025-08-24T17:21:41Z"
Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.193204 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.193251 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.193262 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.193280 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.193291 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:51Z","lastTransitionTime":"2026-01-22T13:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.294816 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.294854 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.294864 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.294878 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.294889 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:51Z","lastTransitionTime":"2026-01-22T13:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.397721 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.397771 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.397784 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.397821 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.397834 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:51Z","lastTransitionTime":"2026-01-22T13:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.500364 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.500420 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.500430 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.500445 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.500489 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:51Z","lastTransitionTime":"2026-01-22T13:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.602847 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.602882 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.602892 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.602906 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.602915 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:51Z","lastTransitionTime":"2026-01-22T13:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.705741 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.705837 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.705865 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.705897 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.705919 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:51Z","lastTransitionTime":"2026-01-22T13:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.808081 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.808150 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.808165 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.808186 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.808200 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:51Z","lastTransitionTime":"2026-01-22T13:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.870844 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 20:38:02.265395619 +0000 UTC
Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.883090 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.883127 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.883183 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 22 13:44:51 crc kubenswrapper[4769]: E0122 13:44:51.883222 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 22 13:44:51 crc kubenswrapper[4769]: E0122 13:44:51.883379 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 22 13:44:51 crc kubenswrapper[4769]: E0122 13:44:51.883498 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.910895 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.910931 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.910943 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.910958 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.910971 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:51Z","lastTransitionTime":"2026-01-22T13:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.013341 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.013372 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.013381 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.013395 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.013404 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:52Z","lastTransitionTime":"2026-01-22T13:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.116483 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.116512 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.116522 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.116537 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.116547 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:52Z","lastTransitionTime":"2026-01-22T13:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.218446 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.218496 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.218510 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.218527 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.218539 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:52Z","lastTransitionTime":"2026-01-22T13:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.321693 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.321736 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.321747 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.321764 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.321777 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:52Z","lastTransitionTime":"2026-01-22T13:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.424511 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.424549 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.424562 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.424577 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.424588 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:52Z","lastTransitionTime":"2026-01-22T13:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.527167 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.527205 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.527218 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.527234 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.527245 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:52Z","lastTransitionTime":"2026-01-22T13:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.629198 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.629239 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.629250 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.629274 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.629291 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:52Z","lastTransitionTime":"2026-01-22T13:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.733335 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.733427 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.733445 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.733469 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.733486 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:52Z","lastTransitionTime":"2026-01-22T13:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.836393 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.836444 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.836456 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.836475 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.836489 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:52Z","lastTransitionTime":"2026-01-22T13:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.870944 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 16:24:51.876862183 +0000 UTC Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.882330 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:44:52 crc kubenswrapper[4769]: E0122 13:44:52.882461 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.938582 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.938614 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.938622 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.938634 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.938643 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:52Z","lastTransitionTime":"2026-01-22T13:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.040712 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.040757 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.040775 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.040836 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.040852 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:53Z","lastTransitionTime":"2026-01-22T13:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.143743 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.143819 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.143828 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.143843 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.143854 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:53Z","lastTransitionTime":"2026-01-22T13:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.246356 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.246413 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.246434 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.246464 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.246491 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:53Z","lastTransitionTime":"2026-01-22T13:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.349270 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.349374 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.349392 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.349415 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.349468 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:53Z","lastTransitionTime":"2026-01-22T13:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.453603 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.453733 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.453758 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.453827 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.453853 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:53Z","lastTransitionTime":"2026-01-22T13:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.556545 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.556590 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.556600 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.556616 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.556629 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:53Z","lastTransitionTime":"2026-01-22T13:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.584288 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.584346 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.584362 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.584385 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.584404 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:53Z","lastTransitionTime":"2026-01-22T13:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:53 crc kubenswrapper[4769]: E0122 13:44:53.599281 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c179e315-653f-44a2-90da-146c8bca7b57\\\",\\\"systemUUID\\\":\\\"a3bb8776-1087-4679-a96f-5f1347bd430e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:53Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.604286 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.604323 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.604332 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.604347 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.604356 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:53Z","lastTransitionTime":"2026-01-22T13:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:53 crc kubenswrapper[4769]: E0122 13:44:53.619848 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c179e315-653f-44a2-90da-146c8bca7b57\\\",\\\"systemUUID\\\":\\\"a3bb8776-1087-4679-a96f-5f1347bd430e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:53Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.623840 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.623888 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.623899 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.623912 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.623922 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:53Z","lastTransitionTime":"2026-01-22T13:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:53 crc kubenswrapper[4769]: E0122 13:44:53.635215 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c179e315-653f-44a2-90da-146c8bca7b57\\\",\\\"systemUUID\\\":\\\"a3bb8776-1087-4679-a96f-5f1347bd430e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:53Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.639878 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.639946 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.639963 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.639983 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.640003 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:53Z","lastTransitionTime":"2026-01-22T13:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:53 crc kubenswrapper[4769]: E0122 13:44:53.656327 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c179e315-653f-44a2-90da-146c8bca7b57\\\",\\\"systemUUID\\\":\\\"a3bb8776-1087-4679-a96f-5f1347bd430e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:53Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.660322 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.660375 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.660393 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.660413 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.660429 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:53Z","lastTransitionTime":"2026-01-22T13:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:53 crc kubenswrapper[4769]: E0122 13:44:53.672079 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c179e315-653f-44a2-90da-146c8bca7b57\\\",\\\"systemUUID\\\":\\\"a3bb8776-1087-4679-a96f-5f1347bd430e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:53Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:53 crc kubenswrapper[4769]: E0122 13:44:53.672296 4769 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.674359 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.674400 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.674417 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.674441 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.674457 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:53Z","lastTransitionTime":"2026-01-22T13:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.777034 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.777088 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.777100 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.777117 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.777129 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:53Z","lastTransitionTime":"2026-01-22T13:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.871420 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 11:07:18.887202669 +0000 UTC Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.879191 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.879239 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.879250 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.879269 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.879284 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:53Z","lastTransitionTime":"2026-01-22T13:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.882392 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.882421 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.882486 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:44:53 crc kubenswrapper[4769]: E0122 13:44:53.882643 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:44:53 crc kubenswrapper[4769]: E0122 13:44:53.882737 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:44:53 crc kubenswrapper[4769]: E0122 13:44:53.882875 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.982660 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.982717 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.982735 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.982757 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.982768 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:53Z","lastTransitionTime":"2026-01-22T13:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.084922 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.084978 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.084994 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.085015 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.085031 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:54Z","lastTransitionTime":"2026-01-22T13:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.187055 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.187099 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.187114 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.187132 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.187144 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:54Z","lastTransitionTime":"2026-01-22T13:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.289728 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.289766 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.289774 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.289805 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.289818 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:54Z","lastTransitionTime":"2026-01-22T13:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.391678 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.391726 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.391738 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.391754 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.391766 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:54Z","lastTransitionTime":"2026-01-22T13:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.494928 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.494982 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.494997 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.495017 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.495035 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:54Z","lastTransitionTime":"2026-01-22T13:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.597035 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.597110 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.597122 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.597138 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.597150 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:54Z","lastTransitionTime":"2026-01-22T13:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.700629 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.700682 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.700704 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.700728 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.700748 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:54Z","lastTransitionTime":"2026-01-22T13:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.803464 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.803502 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.803512 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.803527 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.803538 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:54Z","lastTransitionTime":"2026-01-22T13:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.872237 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 17:41:31.469833339 +0000 UTC Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.882737 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:44:54 crc kubenswrapper[4769]: E0122 13:44:54.882983 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.905938 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.905987 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.906003 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.906026 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.906044 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:54Z","lastTransitionTime":"2026-01-22T13:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.008672 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.008713 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.008726 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.008745 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.008759 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:55Z","lastTransitionTime":"2026-01-22T13:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.111688 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.111751 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.111768 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.111813 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.111830 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:55Z","lastTransitionTime":"2026-01-22T13:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.215200 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.215261 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.215276 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.215300 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.215321 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:55Z","lastTransitionTime":"2026-01-22T13:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.318897 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.318961 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.318976 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.318998 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.319011 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:55Z","lastTransitionTime":"2026-01-22T13:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.421836 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.421869 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.421877 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.421893 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.421906 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:55Z","lastTransitionTime":"2026-01-22T13:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.524735 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.524781 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.524814 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.524830 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.524842 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:55Z","lastTransitionTime":"2026-01-22T13:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.627597 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.627633 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.627642 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.627653 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.627703 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:55Z","lastTransitionTime":"2026-01-22T13:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.730189 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.730251 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.730269 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.730294 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.730312 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:55Z","lastTransitionTime":"2026-01-22T13:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.832569 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.832641 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.832653 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.832669 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.832681 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:55Z","lastTransitionTime":"2026-01-22T13:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.872932 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 03:29:18.638705998 +0000 UTC
Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.882599 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.882630 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.882713 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 22 13:44:55 crc kubenswrapper[4769]: E0122 13:44:55.882889 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 22 13:44:55 crc kubenswrapper[4769]: E0122 13:44:55.883694 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 22 13:44:55 crc kubenswrapper[4769]: E0122 13:44:55.883771 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.884245 4769 scope.go:117] "RemoveContainer" containerID="c8a2c8b17afe59bd8ef3c5908ea0b3175ae6612f24331a32bc0626daa47d5d14"
Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.936251 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.936314 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.936335 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.936366 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.936388 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:55Z","lastTransitionTime":"2026-01-22T13:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.039453 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.039527 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.039545 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.039571 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.039590 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:56Z","lastTransitionTime":"2026-01-22T13:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.141721 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.141827 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.141857 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.141883 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.141901 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:56Z","lastTransitionTime":"2026-01-22T13:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.243960 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.244016 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.244033 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.244055 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.244073 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:56Z","lastTransitionTime":"2026-01-22T13:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.346559 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.346603 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.346613 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.346630 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.346641 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:56Z","lastTransitionTime":"2026-01-22T13:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
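Every cycle above is the same state machine: the kubelet holds the node's Ready condition at False because the container runtime reports NetworkReady=false, and that in turn reduces to a single filesystem check, whether any CNI network configuration exists under /etc/kubernetes/cni/net.d/. The following is a minimal Go sketch of that check, not the kubelet's actual code path: the directory comes straight from the log message, while the accepted extensions (.conf, .conflist, .json) are an assumption about common CNI config naming rather than a claim about the kubelet's exact matching rules.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Directory named in the NetworkPluginNotReady message above.
	dir := "/etc/kubernetes/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Fprintln(os.Stderr, "read dir:", err)
		os.Exit(1)
	}
	found := false
	for _, e := range entries {
		// Assumed CNI config extensions; the real matcher may differ.
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Println("found CNI config:", e.Name())
			found = true
		}
	}
	if !found {
		// The state this log keeps reporting until the network plugin writes its config.
		fmt.Println("no CNI configuration file in", dir)
	}
}

Once the OVN-Kubernetes node pod (ovnkube-node-jrg8z, whose container start is logged just below) writes its configuration into that directory, this check should start succeeding and the repeating NodeNotReady events should stop.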
Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.389752 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jrg8z_9c028db8-99b9-422d-ba46-e1a2db06ce3c/ovnkube-controller/2.log"
Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.392424 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" event={"ID":"9c028db8-99b9-422d-ba46-e1a2db06ce3c","Type":"ContainerStarted","Data":"5f2aa40f7fb64759fa8a5f718239811c0af3f0c9eee1849d5e53acc1d4c486b6"}
Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.393280 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z"
Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.417488 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5e43a9-5dd9-470e-a3e1-65be2c0003c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://932b2d9f5
bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 13:43:58.922221 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 13:43:58.922428 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 13:43:58.923819 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3279922669/tls.crt::/tmp/serving-cert-3279922669/tls.key\\\\\\\"\\\\nI0122 13:43:59.114141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 13:43:59.115832 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 13:43:59.115850 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 13:43:59.115871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 13:43:59.115876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 13:43:59.120589 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 13:43:59.120619 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120629 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 13:43:59.120651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 13:43:59.120656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 13:43:59.120662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 13:43:59.120676 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 13:43:59.123476 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:56Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.445196 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:56Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.449057 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.449112 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.449132 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.449155 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.449171 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:56Z","lastTransitionTime":"2026-01-22T13:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.461668 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b962200fa0a2d9c500125f56e96656fb0feb5d500a768d148cf9ca2a0569f970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f407182e277e55ccfc4bdc9fdd0832e0eab3cad2df3b32a66189e599dd303f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:56Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.484893 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b546a5839399c4434cd7427e59be44414fd81cd31a3b8736bbc23d9c03ab5fd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:56Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.497270 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bqn6j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16fc232a-07ad-4611-8612-7b1c3f784c14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ec909e6b2784f7c1e2fca82827a0c42109ef1c08e0c2ae484574c8f2c6460f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bqn6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:56Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.510932 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0af8746-c9f0-48e6-8a60-02fed286b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be9c299672c1ced5d45263cb73ea0d7766a3cd47bbb16996c91c24206203ffe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9528976f6e0625739097546d794445c24881673bfd0df525a77bbbd61e67897d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hwhw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:56Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.522933 4769 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-x582x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34fa095e-fc7f-431c-8421-1178e63721ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c630331d1e07a9bd283dfb95e9324ecc7666491cb2a4383ba82f45bbcaee3ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c8w6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x582x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:56Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.534709 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fclh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4186e93-df8a-49d3-9068-c8b8acd05baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffa3ce92a87f448f60b39283929d77139230e6bb0052cdeb6303e0f6b13997d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4e835bfb6d47d3628a5f67cb226a00d51c3eebec57de5db55a54406bf1e6122\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T13:44:46Z\\\",\\\"message\\\":\\\"2026-01-22T13:44:01+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_3c45a748-1fff-4c37-bf93-cbbef666b3f8\\\\n2026-01-22T13:44:01+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_3c45a748-1fff-4c37-bf93-cbbef666b3f8 to /host/opt/cni/bin/\\\\n2026-01-22T13:44:01Z [verbose] multus-daemon started\\\\n2026-01-22T13:44:01Z [verbose] Readiness Indicator file check\\\\n2026-01-22T13:44:46Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk8w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fclh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:56Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.545867 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pwktf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29c69aef-2c74-4731-8334-85c8c755be74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05de7d7a90042aebcc3f9c3ecd82febecef6e209d3c12dfe22a55b0a2960afdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m892q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10390dacc9fe0452c4b8e2f3b43ffa16abdb260918a2cea271e546875c22cd84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m892q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pwktf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:56Z is after 2025-08-24T17:21:41Z" Jan 22 
13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.551863 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.551903 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.551921 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.551942 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.551955 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:56Z","lastTransitionTime":"2026-01-22T13:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.556972 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cfh49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9764ff0b-ae92-470b-af85-7c8bb41642ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vshp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vshp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cfh49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:56Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.568464 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:56Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.591219 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8acbc9bd-4838-4547-80a1-a1e16c37bd1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5acd2ae7d3fbf9440fb0095eefddc52c84728ec6d0e4d9821229cb6aa12573d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5c6216f99e720568a7dd7656244ade4b4fd44412e714b6135242a19f78761f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d261d3f5d679cb5cf8f94fe77bb969fd91ca614da19b149ce6eb02a380114d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://752fd19e1b4a5fd4026faf94d00dfd9322825c19436cccb582e074f4c203bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04ae80bb757e0b6ea9a3759ec49af25cff9106966608aa9750a0026e6053d15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"exitCode\\\":0,\\
\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:56Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.604552 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2dae2b-fadf-49f8-8ff4-4772ad128dca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45689b9787ddf9d1ee0769e761b7b24a3804e1a92e4bc522b5fc4ef4dc145ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83046a8dcab554309ccb822f7ea451bd90e9bbe0037f2b51061cb2943122ac47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad0f1c98d4f80ff4fb3c0a0b65518ea6fdf45f4e428661eb8c34419d0774da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0a1825528a237f05c91fcea30668c1b12f4b58a604912844e672424907cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:56Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.618040 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340e014090eff13efd7d84a279f298e5d933733828964a3926f5ce780010c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:56Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.631764 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:56Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.648834 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6f2c33f3aa2c24eb46a672d0d23d034a7833eb85c8d4c7313b04fd659d7db54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d9wdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:56Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.653473 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.653521 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:56 crc 
kubenswrapper[4769]: I0122 13:44:56.653530 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.653546 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.653554 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:56Z","lastTransitionTime":"2026-01-22T13:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.664665 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c028db8-99b9-422d-ba46-e1a2db06ce3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f2aa40f7fb64759fa8a5f718239811c0af3f0c9
eee1849d5e53acc1d4c486b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8a2c8b17afe59bd8ef3c5908ea0b3175ae6612f24331a32bc0626daa47d5d14\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T13:44:27Z\\\",\\\"message\\\":\\\"vent on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0122 13:44:26.875689 6349 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI0122 13:44:26.875703 6349 obj_retry.go:303] Retry object setup: *v1.Pod openshift-etcd/etcd-crc\\\\nI0122 13:44:26.875703 6349 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0122 13:44:26.875715 6349 obj_retry.go:365] Adding new object: *v1.Pod openshift-etcd/etcd-crc\\\\nI0122 13:44:26.875722 6349 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0122 13:44:26.875727 6349 ovn.go:134] Ensuring zone local for Pod openshift-etcd/etcd-crc in node crc\\\\nI0122 13:44:26.875737 6349 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nI0122 13:44:26.875745 6349 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nF0122 13:44:26.875769 6349 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, 
handle\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:56Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.674887 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e17d6c01-6246-4f19-b9a9-e3931ac380fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cd7e89ca0bee05fa5b6d5a5ca1d303af1299572c4480fb92a515acaa792d6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb0506cd1a0b9519c03150969442ddf7bfe4621fed24943b71fed8eb2d9788f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2655c9a58f6e63f5a53485b0bf1a679818c12a7988705232c65930e5f421eb9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d250061efa0ea6e9a6e20599aef055162d62e1c901353b8eac8b3568dff86166\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d250061efa0ea6e9a6e20599aef055162d62e1c901353b8eac8b3568dff86166\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:56Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.755537 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.755579 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.755590 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.755608 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.755619 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:56Z","lastTransitionTime":"2026-01-22T13:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.858248 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.858301 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.858314 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.858333 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.858344 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:56Z","lastTransitionTime":"2026-01-22T13:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.873676 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 21:30:11.980090113 +0000 UTC Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.883088 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:44:56 crc kubenswrapper[4769]: E0122 13:44:56.883237 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.961212 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.961246 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.961255 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.961269 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.961278 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:56Z","lastTransitionTime":"2026-01-22T13:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.063901 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.063962 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.063982 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.064013 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.064039 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:57Z","lastTransitionTime":"2026-01-22T13:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.167027 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.167083 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.167102 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.167125 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.167141 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:57Z","lastTransitionTime":"2026-01-22T13:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.269882 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.269934 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.269951 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.269976 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.269992 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:57Z","lastTransitionTime":"2026-01-22T13:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.372275 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.372315 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.372323 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.372339 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.372349 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:57Z","lastTransitionTime":"2026-01-22T13:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.396409 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jrg8z_9c028db8-99b9-422d-ba46-e1a2db06ce3c/ovnkube-controller/3.log" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.397030 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jrg8z_9c028db8-99b9-422d-ba46-e1a2db06ce3c/ovnkube-controller/2.log" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.399629 4769 generic.go:334] "Generic (PLEG): container finished" podID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerID="5f2aa40f7fb64759fa8a5f718239811c0af3f0c9eee1849d5e53acc1d4c486b6" exitCode=1 Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.399662 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" event={"ID":"9c028db8-99b9-422d-ba46-e1a2db06ce3c","Type":"ContainerDied","Data":"5f2aa40f7fb64759fa8a5f718239811c0af3f0c9eee1849d5e53acc1d4c486b6"} Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.399694 4769 scope.go:117] "RemoveContainer" containerID="c8a2c8b17afe59bd8ef3c5908ea0b3175ae6612f24331a32bc0626daa47d5d14" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.400649 4769 scope.go:117] "RemoveContainer" containerID="5f2aa40f7fb64759fa8a5f718239811c0af3f0c9eee1849d5e53acc1d4c486b6" Jan 22 13:44:57 crc kubenswrapper[4769]: E0122 13:44:57.401098 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-jrg8z_openshift-ovn-kubernetes(9c028db8-99b9-422d-ba46-e1a2db06ce3c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.413118 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pwktf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29c69aef-2c74-4731-8334-85c8c755be74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05de7d7a90042aebcc3f9c3ecd82febecef6e209d3c12dfe22a55b0a2960afdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m892q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10390dacc9fe0452c4b8e2f3b43ffa16abdb260918a2cea271e546875c22cd84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m892q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pwktf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:57Z is after 2025-08-24T17:21:41Z" Jan 22 
13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.422961 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cfh49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9764ff0b-ae92-470b-af85-7c8bb41642ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vshp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vshp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cfh49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:57Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.437399 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:57Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.446337 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x582x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34fa095e-fc7f-431c-8421-1178e63721ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c630331d1e07a9bd283dfb95e9324ecc7666491cb2a4383ba82f45bbcaee3ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c8w6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x582x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:57Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.455758 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fclh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4186e93-df8a-49d3-9068-c8b8acd05baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffa3ce92a87f448f60b39283929d77139230e6bb0052cdeb6303e0f6b13997d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4e835bfb6d47d3628a5f67cb226a00d51c3eebec57de5db55a54406bf1e6122\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T13:44:46Z\\\",\\\"message\\\":\\\"2026-01-22T13:44:01+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_3c45a748-1fff-4c37-bf93-cbbef666b3f8\\\\n2026-01-22T13:44:01+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_3c45a748-1fff-4c37-bf93-cbbef666b3f8 to /host/opt/cni/bin/\\\\n2026-01-22T13:44:01Z [verbose] multus-daemon started\\\\n2026-01-22T13:44:01Z [verbose] Readiness Indicator file check\\\\n2026-01-22T13:44:46Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk8w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fclh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:57Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.466386 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340e014090eff13efd7d84a279f298e5d933733828964a3926f5ce780010c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:57Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.475014 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.475051 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.475066 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.475085 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.475096 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:57Z","lastTransitionTime":"2026-01-22T13:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.477838 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:57Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.492726 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6f2c33f3aa2c24eb46a672d0d23d034a7833eb85c8d4c7313b04fd659d7db54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d9wdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:57Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.509520 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c028db8-99b9-422d-ba46-e1a2db06ce3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f2aa40f7fb64759fa8a5f718239811c0af3f0c9eee1849d5e53acc1d4c486b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8a2c8b17afe59bd8ef3c5908ea0b3175ae6612f24331a32bc0626daa47d5d14\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T13:44:27Z\\\",\\\"message\\\":\\\"vent on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0122 13:44:26.875689 6349 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI0122 13:44:26.875703 6349 obj_retry.go:303] Retry object setup: *v1.Pod openshift-etcd/etcd-crc\\\\nI0122 13:44:26.875703 6349 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0122 13:44:26.875715 6349 obj_retry.go:365] Adding new object: *v1.Pod openshift-etcd/etcd-crc\\\\nI0122 13:44:26.875722 6349 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0122 13:44:26.875727 6349 ovn.go:134] Ensuring zone local for Pod openshift-etcd/etcd-crc in node crc\\\\nI0122 13:44:26.875737 6349 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nI0122 13:44:26.875745 6349 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nF0122 13:44:26.875769 6349 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handle\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f2aa40f7fb64759fa8a5f718239811c0af3f0c9eee1849d5e53acc1d4c486b6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T13:44:57Z\\\",\\\"message\\\":\\\"rvice openshift-machine-api/machine-api-operator-webhook for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) 
load balancers\\\\nI0122 13:44:56.848220 6704 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nI0122 13:44:56.848230 6704 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/machine-api-operator-webhook_TCP_cluster\\\\\\\", UUID:\\\\\\\"e4e4203e-87c7-4024-930a-5d6bdfe2bdde\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-webhook\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/machine-api-operator-webhook_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-webhook\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:fals\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7c6281ad26145a3acc0fc698
848d06ef0524025a79cc0785cfdf1f40828821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:57Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.519592 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e17d6c01-6246-4f19-b9a9-e3931ac380fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cd7e89ca0bee05fa5b6d5a5ca1d303af1299572c4480fb92a515acaa792d6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb0506cd1a0b9519c03150969442ddf7bfe4621fed24943b71fed8eb2d9788f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2655c9a58f6e63f5a53485b0bf1a679818c12a7988705232c65930e5f421eb9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d250061efa0ea6e9a6e20599aef055162d62e1c901353b8eac8b3568dff86166\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d250061efa0ea6e9a6e20599aef055162d62e1c901353b8eac8b3568dff86166\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:57Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.535457 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8acbc9bd-4838-4547-80a1-a1e16c37bd1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5acd2ae7d3fbf9440fb0095eefddc52c84728ec6d0e4d9821229cb6aa12573d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5c6216f99e720568a7dd7656244ade4b4fd44412e714b6135242a19f78761f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d261d3f5d679cb5cf8f94fe77bb969fd91ca614da19b149ce6eb02a380114d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://752fd19e1b4a5fd4026faf94d00dfd9322825c19436cccb582e074f4c203bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04ae80bb757e0b6ea9a3759ec49af25cff9106966608aa9750a0026e6053d15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be
8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:57Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.545749 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2dae2b-fadf-49f8-8ff4-4772ad128dca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45689b9787ddf9d1ee0769e761b7b24a3804e1a92e4bc522b5fc4ef4dc145ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83046a8dcab554309ccb822f7ea451bd90e9bbe0037f2b51061cb2943122ac47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad0f1c98d4f80ff4fb3c0a0b65518ea6fdf45f4e428661eb8c34419d0774da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0a1825528a237f05c91fcea30668c1b12f4b58a604912844e672424907cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:57Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.558735 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b962200fa0a2d9c500125f56e96656fb0feb5d500a768d148cf9ca2a0569f970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f407182e277e55ccfc4bdc9fdd0832e0eab3cad2df3b32a66189e599dd303f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:57Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.566864 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b546a5839399c4434cd7427e59be44414fd81cd31a3b8736bbc23d9c03ab5fd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:57Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.577816 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.577861 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.577876 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.577894 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.577904 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:57Z","lastTransitionTime":"2026-01-22T13:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.579702 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5e43a9-5dd9-470e-a3e1-65be2c0003c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 13:43:58.922221 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 13:43:58.922428 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 13:43:58.923819 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3279922669/tls.crt::/tmp/serving-cert-3279922669/tls.key\\\\\\\"\\\\nI0122 13:43:59.114141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 13:43:59.115832 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 13:43:59.115850 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 13:43:59.115871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 13:43:59.115876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 13:43:59.120589 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 13:43:59.120619 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120629 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 13:43:59.120651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 13:43:59.120656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 13:43:59.120662 1 secure_serving.go:69] Use of insecure 
cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 13:43:59.120676 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 13:43:59.123476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:57Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.591242 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:57Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.601017 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0af8746-c9f0-48e6-8a60-02fed286b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be9c299672c1ced5d45263cb73ea0d7766a3cd47bbb16996c91c24206203ffe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9528976f6e0625739097546d794445c24881673bfd0df525a77bbbd61e67897d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hwhw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:57Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.609913 4769 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-bqn6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16fc232a-07ad-4611-8612-7b1c3f784c14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ec909e6b2784f7c1e2fca82827a0c42109ef1c08e0c2ae484574c8f2c6460f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bqn6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:57Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.680228 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.680283 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.680298 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.680318 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.680332 4769 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:57Z","lastTransitionTime":"2026-01-22T13:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.783629 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.783684 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.783695 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.783712 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.783723 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:57Z","lastTransitionTime":"2026-01-22T13:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.874213 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 20:09:48.623975891 +0000 UTC Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.882548 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.882587 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:44:57 crc kubenswrapper[4769]: E0122 13:44:57.882711 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.882765 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:44:57 crc kubenswrapper[4769]: E0122 13:44:57.882959 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:44:57 crc kubenswrapper[4769]: E0122 13:44:57.883015 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.887360 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.887420 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.887488 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.887528 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.887553 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:57Z","lastTransitionTime":"2026-01-22T13:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.990573 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.990658 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.990683 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.990717 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.990741 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:57Z","lastTransitionTime":"2026-01-22T13:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.093453 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.093497 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.093508 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.093524 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.093535 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:58Z","lastTransitionTime":"2026-01-22T13:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.196736 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.196860 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.196886 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.196922 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.196945 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:58Z","lastTransitionTime":"2026-01-22T13:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.299261 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.299349 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.299380 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.299410 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.299433 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:58Z","lastTransitionTime":"2026-01-22T13:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.401939 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.402008 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.402028 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.402056 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.402080 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:58Z","lastTransitionTime":"2026-01-22T13:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.406050 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jrg8z_9c028db8-99b9-422d-ba46-e1a2db06ce3c/ovnkube-controller/3.log" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.410099 4769 scope.go:117] "RemoveContainer" containerID="5f2aa40f7fb64759fa8a5f718239811c0af3f0c9eee1849d5e53acc1d4c486b6" Jan 22 13:44:58 crc kubenswrapper[4769]: E0122 13:44:58.410264 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-jrg8z_openshift-ovn-kubernetes(9c028db8-99b9-422d-ba46-e1a2db06ce3c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.424363 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b546a5839399c4434cd7427e59be44414fd81cd31a3b8736bbc23d9c03ab5fd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:58Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.438532 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5e43a9-5dd9-470e-a3e1-65be2c0003c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 13:43:58.922221 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 13:43:58.922428 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 13:43:58.923819 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3279922669/tls.crt::/tmp/serving-cert-3279922669/tls.key\\\\\\\"\\\\nI0122 13:43:59.114141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 13:43:59.115832 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 13:43:59.115850 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 13:43:59.115871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 13:43:59.115876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 13:43:59.120589 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 13:43:59.120619 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120629 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 13:43:59.120651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 13:43:59.120656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 13:43:59.120662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 13:43:59.120676 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 13:43:59.123476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:58Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.456229 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:58Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.474876 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b962200fa0a2d9c500125f56e96656fb0feb5d500a768d148cf9ca2a0569f970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f407182e277e55ccfc4bdc9fdd0832e0eab3cad2df3b32a66189e599dd303f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:58Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.491090 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0af8746-c9f0-48e6-8a60-02fed286b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be9c299672c1ced5d45263cb73ea0d7766a3cd47bbb16996c91c24206203ffe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9528976f6e0625739097546d794445c24881673bfd0df525a77bbbd61e67897d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hwhw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:58Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.502938 4769 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-bqn6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16fc232a-07ad-4611-8612-7b1c3f784c14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ec909e6b2784f7c1e2fca82827a0c42109ef1c08e0c2ae484574c8f2c6460f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bqn6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:58Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.505020 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.505089 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.505115 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.505145 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.505168 4769 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:58Z","lastTransitionTime":"2026-01-22T13:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.514617 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cfh49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9764ff0b-ae92-470b-af85-7c8bb41642ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vshp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vshp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cfh49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T13:44:58Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.526911 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:58Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.536098 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x582x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34fa095e-fc7f-431c-8421-1178e63721ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c630331d1e07a9bd283dfb95e9324ecc7666491cb2a4383ba82f45bbcaee3ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c8w6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x582x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:58Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.548101 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fclh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4186e93-df8a-49d3-9068-c8b8acd05baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffa3ce92a87f448f60b39283929d77139230e6bb0052cdeb6303e0f6b13997d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4e835bfb6d47d3628a5f67cb226a00d51c3eebec57de5db55a54406bf1e6122\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T13:44:46Z\\\",\\\"message\\\":\\\"2026-01-22T13:44:01+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_3c45a748-1fff-4c37-bf93-cbbef666b3f8\\\\n2026-01-22T13:44:01+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_3c45a748-1fff-4c37-bf93-cbbef666b3f8 to /host/opt/cni/bin/\\\\n2026-01-22T13:44:01Z [verbose] multus-daemon started\\\\n2026-01-22T13:44:01Z [verbose] Readiness Indicator file check\\\\n2026-01-22T13:44:46Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk8w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fclh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:58Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.559614 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pwktf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29c69aef-2c74-4731-8334-85c8c755be74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05de7d7a90042aebcc3f9c3ecd82febecef6e209d3c12dfe22a55b0a2960afdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m892q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10390dacc9fe0452c4b8e2f3b43ffa16abdb260918a2cea271e546875c22cd84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m892q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pwktf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:58Z is after 2025-08-24T17:21:41Z" Jan 22 
13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.572563 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:58Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.589543 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6f2c33f3aa2c24eb46a672d0d23d034a7833eb85c8d4c7313b04fd659d7db54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d9wdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:58Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.608005 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.608072 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:58 crc 
kubenswrapper[4769]: I0122 13:44:58.608089 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.608113 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.608130 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:58Z","lastTransitionTime":"2026-01-22T13:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.615154 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c028db8-99b9-422d-ba46-e1a2db06ce3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f2aa40f7fb64759fa8a5f718239811c0af3f0c9
eee1849d5e53acc1d4c486b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f2aa40f7fb64759fa8a5f718239811c0af3f0c9eee1849d5e53acc1d4c486b6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T13:44:57Z\\\",\\\"message\\\":\\\"rvice openshift-machine-api/machine-api-operator-webhook for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0122 13:44:56.848220 6704 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nI0122 13:44:56.848230 6704 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/machine-api-operator-webhook_TCP_cluster\\\\\\\", UUID:\\\\\\\"e4e4203e-87c7-4024-930a-5d6bdfe2bdde\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-webhook\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/machine-api-operator-webhook_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-webhook\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:fals\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jrg8z_openshift-ovn-kubernetes(9c028db8-99b9-422d-ba46-e1a2db06ce3c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:58Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.630613 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e17d6c01-6246-4f19-b9a9-e3931ac380fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cd7e89ca0bee05fa5b6d5a5ca1d303af1299572c4480fb92a515acaa792d6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb0506cd1a0b9519c03150969442ddf7bfe4621fed24943b71fed8eb2d9788f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c
97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2655c9a58f6e63f5a53485b0bf1a679818c12a7988705232c65930e5f421eb9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d250061efa0ea6e9a6e20599aef055162d62e1c901353b8eac8b3568dff86166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d250061efa0ea6e9a6e20599aef055162d62e1c901353b8eac8b3568dff86166\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:58Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.656004 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8acbc9bd-4838-4547-80a1-a1e16c37bd1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5acd2ae7d3fbf9440fb0095eefddc52c84728ec6d0e4d9821229cb6aa12573d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5c6216f99e720568a7dd7656244ade4b4fd44412e714b6135242a19f78761f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d261d3f5d679cb5cf8f94fe77bb969fd91ca614da19b149ce6eb02a380114d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://752fd19e1b4a5fd4026faf94d00dfd9322825c1
9436cccb582e074f4c203bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04ae80bb757e0b6ea9a3759ec49af25cff9106966608aa9750a0026e6053d15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:58Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.670534 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2dae2b-fadf-49f8-8ff4-4772ad128dca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45689b9787ddf9d1ee0769e761b7b24a3804e1a92e4bc522b5fc4ef4dc145ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83046a8dcab554309ccb822f7ea451bd90e9bbe0037f2b51061cb2943122ac47\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad0f1c98d4f80ff4fb3c0a0b65518ea6fdf45f4e428661eb8c34419d0774da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0a1825528a237f05c91fcea30668c1b12f4b58a604912844e672424907cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:58Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.687321 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340e014090eff13efd7d84a279f298e5d933733828964a3926f5ce780010c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:58Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.710307 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.710338 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.710349 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.710363 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.710372 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:58Z","lastTransitionTime":"2026-01-22T13:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.813172 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.813230 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.813250 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.813274 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.813291 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:58Z","lastTransitionTime":"2026-01-22T13:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.875216 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 13:46:18.272885062 +0000 UTC Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.882963 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:44:58 crc kubenswrapper[4769]: E0122 13:44:58.883149 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.915685 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.915769 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.915840 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.915871 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.915894 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:58Z","lastTransitionTime":"2026-01-22T13:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
[5-entry node-status block (NodeHasSufficientMemory/NodeHasNoDiskPressure/NodeHasSufficientPID/NodeNotReady "Recording event message for node" entries plus the setters.go:603 "Node became not ready" Ready=False condition, all with the identical CNI message) repeated verbatim 8 times between 13:44:59.018680 and 13:44:59.741521]
Jan 22 13:44:59 crc kubenswrapper[4769]: I0122 13:44:59.844182 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:59 crc kubenswrapper[4769]: I0122 13:44:59.844238 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:59 crc kubenswrapper[4769]: I0122 13:44:59.844262 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:59 crc kubenswrapper[4769]: I0122 13:44:59.844290 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:59 crc kubenswrapper[4769]: I0122 13:44:59.844313 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:59Z","lastTransitionTime":"2026-01-22T13:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
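
The NodeNotReady churn above has a single cause: the container runtime reports NetworkReady=false until a CNI network config exists in /etc/kubernetes/cni/net.d/ (on this CRC node that file is written by the OVN-Kubernetes/multus pods once they come up). A minimal sketch of the readiness test implied by the message — not kubelet's or CRI-O's actual code; the conf dir is taken from the log and the extensions from common CNI conventions:

package main

// cni_ready.go — sketch: report NetworkReady the way the log's message implies,
// i.e. false until a CNI network config file appears in the conf directory.

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // directory named in the log
	var found []string
	// .conf, .conflist and .json are the conventional CNI config extensions.
	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, err := filepath.Glob(filepath.Join(confDir, pat))
		if err != nil {
			fmt.Fprintln(os.Stderr, "glob:", err)
			os.Exit(1)
		}
		found = append(found, matches...)
	}
	if len(found) == 0 {
		fmt.Printf("NetworkReady=false: no CNI configuration file in %s\n", confDir)
		os.Exit(1)
	}
	fmt.Println("NetworkReady=true, CNI config(s):", found)
}

Until that glob would match, every pod without host networking stays in the "No sandbox for pod" / "Error syncing pod, skipping" loop seen throughout this excerpt.
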
Jan 22 13:44:59 crc kubenswrapper[4769]: I0122 13:44:59.875821 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 03:45:19.652817626 +0000 UTC
Jan 22 13:44:59 crc kubenswrapper[4769]: I0122 13:44:59.883246 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 22 13:44:59 crc kubenswrapper[4769]: I0122 13:44:59.883370 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 22 13:44:59 crc kubenswrapper[4769]: I0122 13:44:59.883599 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 22 13:44:59 crc kubenswrapper[4769]: E0122 13:44:59.883727 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 22 13:44:59 crc kubenswrapper[4769]: E0122 13:44:59.883899 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 22 13:44:59 crc kubenswrapper[4769]: E0122 13:44:59.883987 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 22 13:44:59 crc kubenswrapper[4769]: I0122 13:44:59.898028 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"]
Jan 22 13:44:59 crc kubenswrapper[4769]: I0122 13:44:59.947407 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:59 crc kubenswrapper[4769]: I0122 13:44:59.947476 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:59 crc kubenswrapper[4769]: I0122 13:44:59.947496 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:59 crc kubenswrapper[4769]: I0122 13:44:59.947522 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:59 crc kubenswrapper[4769]: I0122 13:44:59.947539 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:59Z","lastTransitionTime":"2026-01-22T13:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:45:00 crc kubenswrapper[4769]: I0122 13:45:00.050771 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:45:00 crc kubenswrapper[4769]: I0122 13:45:00.050874 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:45:00 crc kubenswrapper[4769]: I0122 13:45:00.050897 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:45:00 crc kubenswrapper[4769]: I0122 13:45:00.050925 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:45:00 crc kubenswrapper[4769]: I0122 13:45:00.050947 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:00Z","lastTransitionTime":"2026-01-22T13:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[5-entry node-status block (NodeHasSufficientMemory/NodeHasNoDiskPressure/NodeHasSufficientPID/NodeNotReady "Recording event message for node" entries plus the setters.go:603 "Node became not ready" Ready=False condition, all with the identical CNI message) repeated verbatim 7 times between 13:45:00.154568 and 13:45:00.769271]
Jan 22 13:45:00 crc kubenswrapper[4769]: I0122 13:45:00.871503 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:45:00 crc kubenswrapper[4769]: I0122 13:45:00.871551 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:45:00 crc kubenswrapper[4769]: I0122 13:45:00.871563 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:45:00 crc kubenswrapper[4769]: I0122 13:45:00.871586 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:45:00 crc kubenswrapper[4769]: I0122 13:45:00.871599 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:00Z","lastTransitionTime":"2026-01-22T13:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:45:00 crc kubenswrapper[4769]: I0122 13:45:00.876696 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 15:43:04.35947726 +0000 UTC
Jan 22 13:45:00 crc kubenswrapper[4769]: I0122 13:45:00.883090 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49"
Jan 22 13:45:00 crc kubenswrapper[4769]: E0122 13:45:00.883225 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:45:00 crc kubenswrapper[4769]: I0122 13:45:00.905112 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b962200fa0a2d9c500125f56e96656fb0feb5d500a768d148cf9ca2a0569f970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f407182e277e55ccfc4bdc9fdd0832e0eab3cad2df3b32a66189e599dd303f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:45:00Z is after 2025-08-24T17:21:41Z" Jan 22 13:45:00 crc kubenswrapper[4769]: I0122 13:45:00.927479 4769 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b546a5839399c4434cd7427e59be44414fd81cd31a3b8736bbc23d9c03ab5fd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:45:00Z is after 2025-08-24T17:21:41Z" Jan 22 13:45:00 crc kubenswrapper[4769]: I0122 13:45:00.945285 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3ee5efc-8b71-4691-8f78-ff11abb2d770\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0315382a0b43a2b3069391b3c63464c38b94daf1baf2700f5001abca332fc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8e31b29c1c4da39b2854e1750a906e380a822c602e2b7a24158ee582ba95627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8e31b29c1c4da39b2854e1750a906e380a822c602e2b7a24158ee582ba95627\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:45:00Z is after 2025-08-24T17:21:41Z" Jan 22 13:45:00 crc kubenswrapper[4769]: I0122 13:45:00.965300 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5e43a9-5dd9-470e-a3e1-65be2c0003c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 13:43:58.922221 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 13:43:58.922428 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 13:43:58.923819 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3279922669/tls.crt::/tmp/serving-cert-3279922669/tls.key\\\\\\\"\\\\nI0122 13:43:59.114141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 13:43:59.115832 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 13:43:59.115850 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 13:43:59.115871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 13:43:59.115876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 13:43:59.120589 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 13:43:59.120619 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120629 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 13:43:59.120651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 13:43:59.120656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 13:43:59.120662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 13:43:59.120676 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 13:43:59.123476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:45:00Z is after 2025-08-24T17:21:41Z" Jan 22 13:45:00 crc kubenswrapper[4769]: I0122 13:45:00.974436 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:00 crc kubenswrapper[4769]: I0122 13:45:00.974487 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:00 crc kubenswrapper[4769]: I0122 13:45:00.974501 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:00 crc kubenswrapper[4769]: I0122 13:45:00.974521 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:00 crc kubenswrapper[4769]: I0122 13:45:00.974536 4769 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:00Z","lastTransitionTime":"2026-01-22T13:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:00 crc kubenswrapper[4769]: I0122 13:45:00.980235 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:45:00Z is after 2025-08-24T17:21:41Z" Jan 22 13:45:00 crc kubenswrapper[4769]: I0122 13:45:00.994431 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0af8746-c9f0-48e6-8a60-02fed286b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be9c299672c1ced5d45263cb73ea0d7766a3cd47bbb16996c91c24206203ffe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9528976f6e0625739097546d794445c24881673bfd0df525a77bbbd61e67897d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hwhw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:45:00Z is after 2025-08-24T17:21:41Z" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.012723 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bqn6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16fc232a-07ad-4611-8612-7b1c3f784c14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ec909e6b2784f7c1e2fca82827a0c42109ef1c08e0c2ae484574c8f2c6460f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bqn6j\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:45:01Z is after 2025-08-24T17:21:41Z" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.027273 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pwktf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29c69aef-2c74-4731-8334-85c8c755be74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05de7d7a90042aebcc3f9c3ecd82febecef6e209d3c12dfe22a55b0a2960afdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m892q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10390dacc9fe0452c4b8e2f3b43ffa16abdb260918a2cea271e546875c22cd84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m892q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pwktf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:45:01Z is after 2025-08-24T17:21:41Z" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.041304 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cfh49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9764ff0b-ae92-470b-af85-7c8bb41642ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vshp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vshp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cfh49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:45:01Z is after 2025-08-24T17:21:41Z" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.055161 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:45:01Z is after 2025-08-24T17:21:41Z" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.069120 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x582x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34fa095e-fc7f-431c-8421-1178e63721ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c630331d1e07a9bd283dfb95e9324ecc7666491cb2a4383ba82f45bbcaee3ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c8w6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x582x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:45:01Z is after 2025-08-24T17:21:41Z" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.076696 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.076758 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.076775 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.076823 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.076848 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:01Z","lastTransitionTime":"2026-01-22T13:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.084081 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fclh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4186e93-df8a-49d3-9068-c8b8acd05baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffa3ce92a87f448f60b39283929d77139230e6bb0052cdeb6303e0f6b13997d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4e835bfb6d47d3628a5f67cb226a00d51c3eebec57de5db55a54406bf1e6122\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T13:44:46Z\\\",\\\"message\\\":\\\"2026-01-22T13:44:01+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_3c45a748-1fff-4c37-bf93-cbbef666b3f8\\\\n2026-01-22T13:44:01+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_3c45a748-1fff-4c37-bf93-cbbef666b3f8 to /host/opt/cni/bin/\\\\n2026-01-22T13:44:01Z [verbose] multus-daemon started\\\\n2026-01-22T13:44:01Z [verbose] Readiness Indicator file check\\\\n2026-01-22T13:44:46Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk8w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fclh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:45:01Z is after 2025-08-24T17:21:41Z" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.099686 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340e014090eff13efd7d84a279f298e5d933733828964a3926f5ce780010c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:45:01Z is after 2025-08-24T17:21:41Z" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.116704 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:45:01Z is after 2025-08-24T17:21:41Z" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.133352 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6f2c33f3aa2c24eb46a672d0d23d034a7833eb85c8d4c7313b04fd659d7db54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f
8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:06Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d9wdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:45:01Z is after 2025-08-24T17:21:41Z" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.162666 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c028db8-99b9-422d-ba46-e1a2db06ce3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f2aa40f7fb64759fa8a5f718239811c0af3f0c9
eee1849d5e53acc1d4c486b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f2aa40f7fb64759fa8a5f718239811c0af3f0c9eee1849d5e53acc1d4c486b6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T13:44:57Z\\\",\\\"message\\\":\\\"rvice openshift-machine-api/machine-api-operator-webhook for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0122 13:44:56.848220 6704 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nI0122 13:44:56.848230 6704 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/machine-api-operator-webhook_TCP_cluster\\\\\\\", UUID:\\\\\\\"e4e4203e-87c7-4024-930a-5d6bdfe2bdde\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-webhook\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/machine-api-operator-webhook_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-webhook\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:fals\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jrg8z_openshift-ovn-kubernetes(9c028db8-99b9-422d-ba46-e1a2db06ce3c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:45:01Z is after 2025-08-24T17:21:41Z" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.177965 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e17d6c01-6246-4f19-b9a9-e3931ac380fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cd7e89ca0bee05fa5b6d5a5ca1d303af1299572c4480fb92a515acaa792d6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb0506cd1a0b9519c03150969442ddf7bfe4621fed24943b71fed8eb2d9788f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c
97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2655c9a58f6e63f5a53485b0bf1a679818c12a7988705232c65930e5f421eb9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d250061efa0ea6e9a6e20599aef055162d62e1c901353b8eac8b3568dff86166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d250061efa0ea6e9a6e20599aef055162d62e1c901353b8eac8b3568dff86166\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:45:01Z is after 2025-08-24T17:21:41Z" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.179484 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.179513 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.179521 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.179536 4769 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.179545 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:01Z","lastTransitionTime":"2026-01-22T13:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.196323 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8acbc9bd-4838-4547-80a1-a1e16c37bd1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5acd2ae7d3fbf9440fb0095eefddc52c84728ec6d0e4d9821229cb6aa12573d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5c6216f99e720568a7dd7656244ade4b4fd44412e714b6135242a19f78761f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d261d3f5d679cb5cf8f94fe77bb969fd91ca614da19b149ce6eb02a3
80114d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://752fd19e1b4a5fd4026faf94d00dfd9322825c19436cccb582e074f4c203bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04ae80bb757e0b6ea9a3759ec49af25cff9106966608aa9750a0026e6053d15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:45:01Z is after 2025-08-24T17:21:41Z" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.208825 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2dae2b-fadf-49f8-8ff4-4772ad128dca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45689b9787ddf9d1ee0769e761b7b24a3804e1a92e4bc522b5fc4ef4dc145ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83046a8dcab554309ccb822f7ea451bd90e9bbe0037f2b51061cb2943122ac47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad0f1c98d4f80ff4fb3c0a0b65518ea6fdf45f4e428661eb8c34419d0774da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0a1825528a237f05c91fcea30668c1b12f4b58a604912844e672424907cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:45:01Z is after 2025-08-24T17:21:41Z" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.281673 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.281707 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.281716 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.281730 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.281740 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:01Z","lastTransitionTime":"2026-01-22T13:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.385255 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.385291 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.385299 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.385312 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.385322 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:01Z","lastTransitionTime":"2026-01-22T13:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.487915 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.488287 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.488305 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.488328 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.488344 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:01Z","lastTransitionTime":"2026-01-22T13:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.590499 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.590536 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.590544 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.590558 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.590566 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:01Z","lastTransitionTime":"2026-01-22T13:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.693299 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.693344 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.693356 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.693372 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.693385 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:01Z","lastTransitionTime":"2026-01-22T13:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.795426 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.795693 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.795777 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.795872 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.795947 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:01Z","lastTransitionTime":"2026-01-22T13:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.877095 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 11:20:01.499143232 +0000 UTC Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.882418 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.882497 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:45:01 crc kubenswrapper[4769]: E0122 13:45:01.882543 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.882636 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:45:01 crc kubenswrapper[4769]: E0122 13:45:01.882759 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:45:01 crc kubenswrapper[4769]: E0122 13:45:01.883091 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.898465 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.898514 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.898537 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.898565 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.898584 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:01Z","lastTransitionTime":"2026-01-22T13:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.001129 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.001190 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.001206 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.001229 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.001247 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:02Z","lastTransitionTime":"2026-01-22T13:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.104403 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.104462 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.104480 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.104505 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.104522 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:02Z","lastTransitionTime":"2026-01-22T13:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.207496 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.207567 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.207579 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.207620 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.207634 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:02Z","lastTransitionTime":"2026-01-22T13:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.310105 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.310206 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.310220 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.310235 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.310245 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:02Z","lastTransitionTime":"2026-01-22T13:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.413234 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.413285 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.413302 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.413322 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.413337 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:02Z","lastTransitionTime":"2026-01-22T13:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.515184 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.515231 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.515243 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.515263 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.515280 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:02Z","lastTransitionTime":"2026-01-22T13:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.617964 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.618342 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.618478 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.618621 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.618762 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:02Z","lastTransitionTime":"2026-01-22T13:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.721942 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.722301 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.722555 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.722724 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.722906 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:02Z","lastTransitionTime":"2026-01-22T13:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.825719 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.825761 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.825771 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.825786 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.825811 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:02Z","lastTransitionTime":"2026-01-22T13:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.878263 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 09:42:32.403972393 +0000 UTC Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.882616 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:45:02 crc kubenswrapper[4769]: E0122 13:45:02.882825 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.927442 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.927495 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.927506 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.927524 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.927535 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:02Z","lastTransitionTime":"2026-01-22T13:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.029953 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.029998 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.030009 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.030026 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.030037 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:03Z","lastTransitionTime":"2026-01-22T13:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.134025 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.134090 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.134103 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.134122 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.134134 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:03Z","lastTransitionTime":"2026-01-22T13:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.236874 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.236905 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.236913 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.236926 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.236937 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:03Z","lastTransitionTime":"2026-01-22T13:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.339249 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.339301 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.339313 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.339333 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.339345 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:03Z","lastTransitionTime":"2026-01-22T13:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.441370 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.441420 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.441447 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.441475 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.441490 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:03Z","lastTransitionTime":"2026-01-22T13:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.544126 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.544189 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.544204 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.544228 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.544250 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:03Z","lastTransitionTime":"2026-01-22T13:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.646606 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.646651 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.646662 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.646677 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.646687 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:03Z","lastTransitionTime":"2026-01-22T13:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.748980 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.749039 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.749056 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.749078 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.749094 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:03Z","lastTransitionTime":"2026-01-22T13:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.843957 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.844030 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.844048 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.844074 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.844093 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:03Z","lastTransitionTime":"2026-01-22T13:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:45:03 crc kubenswrapper[4769]: E0122 13:45:03.858511 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:45:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:45:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:45:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:45:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:45:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:45:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:45:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:45:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c179e315-653f-44a2-90da-146c8bca7b57\\\",\\\"systemUUID\\\":\\\"a3bb8776-1087-4679-a96f-5f1347bd430e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:45:03Z is after 2025-08-24T17:21:41Z" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.867503 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.867537 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.867547 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.867562 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.867571 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:03Z","lastTransitionTime":"2026-01-22T13:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.878581 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 16:25:21.68190389 +0000 UTC Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.882860 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.882860 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.883018 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:45:03 crc kubenswrapper[4769]: E0122 13:45:03.883310 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:45:03 crc kubenswrapper[4769]: E0122 13:45:03.884884 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:45:03 crc kubenswrapper[4769]: E0122 13:45:03.884962 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:45:03 crc kubenswrapper[4769]: E0122 13:45:03.885017 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:45:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:45:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:45:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:45:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:45:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:45:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:45:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:45:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c179e315-653f-44a2-90da-146c8bca7b57\\\",\\\"systemUUID\\\":\\\"a3bb8776-1087-4679-a96f-5f1347bd430e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:45:03Z is after 2025-08-24T17:21:41Z" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.887066 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:45:03 crc kubenswrapper[4769]: E0122 13:45:03.887239 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:46:07.887209994 +0000 UTC m=+147.298319963 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.887336 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.887530 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:45:03 crc kubenswrapper[4769]: E0122 13:45:03.887545 4769 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 13:45:03 crc kubenswrapper[4769]: E0122 13:45:03.887626 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 13:46:07.887603905 +0000 UTC m=+147.298713864 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 13:45:03 crc kubenswrapper[4769]: E0122 13:45:03.887687 4769 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 13:45:03 crc kubenswrapper[4769]: E0122 13:45:03.887759 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 13:46:07.887735388 +0000 UTC m=+147.298845347 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.889541 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.889596 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.889613 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.889636 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.889655 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:03Z","lastTransitionTime":"2026-01-22T13:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:03 crc kubenswrapper[4769]: E0122 13:45:03.905346 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:45:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:45:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:45:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:45:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:45:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:45:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:45:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:45:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c179e315-653f-44a2-90da-146c8bca7b57\\\",\\\"systemUUID\\\":\\\"a3bb8776-1087-4679-a96f-5f1347bd430e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:45:03Z is after 2025-08-24T17:21:41Z" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.910399 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.910471 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.910492 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.910524 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.910547 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:03Z","lastTransitionTime":"2026-01-22T13:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:03 crc kubenswrapper[4769]: E0122 13:45:03.931350 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:45:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:45:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:45:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:45:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:45:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:45:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:45:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:45:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c179e315-653f-44a2-90da-146c8bca7b57\\\",\\\"systemUUID\\\":\\\"a3bb8776-1087-4679-a96f-5f1347bd430e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:45:03Z is after 2025-08-24T17:21:41Z" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.936146 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.936196 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.936208 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.936226 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.936240 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:03Z","lastTransitionTime":"2026-01-22T13:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:03 crc kubenswrapper[4769]: E0122 13:45:03.951975 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:45:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:45:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:45:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:45:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:45:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:45:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:45:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:45:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c179e315-653f-44a2-90da-146c8bca7b57\\\",\\\"systemUUID\\\":\\\"a3bb8776-1087-4679-a96f-5f1347bd430e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:45:03Z is after 2025-08-24T17:21:41Z" Jan 22 13:45:03 crc kubenswrapper[4769]: E0122 13:45:03.952130 4769 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.953825 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.953867 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.953903 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.953922 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.953935 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:03Z","lastTransitionTime":"2026-01-22T13:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.988804 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.988865 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:45:03 crc kubenswrapper[4769]: E0122 13:45:03.988967 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 13:45:03 crc kubenswrapper[4769]: E0122 13:45:03.988982 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 13:45:03 crc kubenswrapper[4769]: E0122 13:45:03.988993 4769 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 13:45:03 crc kubenswrapper[4769]: E0122 13:45:03.989013 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 13:45:03 crc kubenswrapper[4769]: E0122 13:45:03.989042 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 13:45:03 crc kubenswrapper[4769]: E0122 13:45:03.989054 4769 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 13:45:03 crc kubenswrapper[4769]: E0122 13:45:03.989042 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 13:46:07.989029699 +0000 UTC m=+147.400139628 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 13:45:03 crc kubenswrapper[4769]: E0122 13:45:03.989110 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-22 13:46:07.989096041 +0000 UTC m=+147.400205970 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.056617 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.056666 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.056679 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.056698 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.056713 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:04Z","lastTransitionTime":"2026-01-22T13:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.159681 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.159718 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.159730 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.159745 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.159758 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:04Z","lastTransitionTime":"2026-01-22T13:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.262075 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.262127 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.262138 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.262157 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.262169 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:04Z","lastTransitionTime":"2026-01-22T13:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.364897 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.364948 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.364956 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.364971 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.364983 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:04Z","lastTransitionTime":"2026-01-22T13:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.468096 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.468169 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.468191 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.468219 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.468241 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:04Z","lastTransitionTime":"2026-01-22T13:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.571169 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.571234 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.571251 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.571276 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.571294 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:04Z","lastTransitionTime":"2026-01-22T13:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.673688 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.673734 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.673746 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.673762 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.673774 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:04Z","lastTransitionTime":"2026-01-22T13:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.776305 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.776361 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.776377 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.776397 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.776411 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:04Z","lastTransitionTime":"2026-01-22T13:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.878829 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 04:02:10.581141434 +0000 UTC Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.879272 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.879301 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.879309 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.879324 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.879333 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:04Z","lastTransitionTime":"2026-01-22T13:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.882750 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:45:04 crc kubenswrapper[4769]: E0122 13:45:04.883116 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.981171 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.981230 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.981242 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.981257 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.981267 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:04Z","lastTransitionTime":"2026-01-22T13:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.083711 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.083774 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.083783 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.083820 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.083832 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:05Z","lastTransitionTime":"2026-01-22T13:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.186982 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.187108 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.187193 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.187291 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.187353 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:05Z","lastTransitionTime":"2026-01-22T13:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.290541 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.290616 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.290638 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.290664 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.290684 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:05Z","lastTransitionTime":"2026-01-22T13:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.393579 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.393656 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.393674 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.393700 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.393720 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:05Z","lastTransitionTime":"2026-01-22T13:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.497345 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.497430 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.497451 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.497476 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.497497 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:05Z","lastTransitionTime":"2026-01-22T13:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.600746 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.600844 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.600863 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.600888 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.600905 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:05Z","lastTransitionTime":"2026-01-22T13:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.704243 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.704292 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.704303 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.704320 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.704331 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:05Z","lastTransitionTime":"2026-01-22T13:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.807390 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.807450 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.807470 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.807494 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.807514 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:05Z","lastTransitionTime":"2026-01-22T13:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.879015 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 08:34:44.391885898 +0000 UTC
Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.882238 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.882301 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.882300 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 22 13:45:05 crc kubenswrapper[4769]: E0122 13:45:05.882388 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 22 13:45:05 crc kubenswrapper[4769]: E0122 13:45:05.882644 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 22 13:45:05 crc kubenswrapper[4769]: E0122 13:45:05.882745 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.917074 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.917136 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.917151 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.917171 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.917184 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:05Z","lastTransitionTime":"2026-01-22T13:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.020851 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.020900 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.020911 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.020929 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.020941 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:06Z","lastTransitionTime":"2026-01-22T13:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.123537 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.123705 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.123730 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.123755 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.123772 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:06Z","lastTransitionTime":"2026-01-22T13:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.226051 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.226131 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.226165 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.226194 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.226216 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:06Z","lastTransitionTime":"2026-01-22T13:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.328839 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.328903 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.328914 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.328930 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.328941 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:06Z","lastTransitionTime":"2026-01-22T13:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.431013 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.431082 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.431100 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.431125 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.431142 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:06Z","lastTransitionTime":"2026-01-22T13:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.534102 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.534153 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.534165 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.534186 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.534199 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:06Z","lastTransitionTime":"2026-01-22T13:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.636883 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.637008 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.637039 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.637071 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.637132 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:06Z","lastTransitionTime":"2026-01-22T13:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.740085 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.740161 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.740178 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.740201 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.740219 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:06Z","lastTransitionTime":"2026-01-22T13:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.842455 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.842520 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.842537 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.842560 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.842576 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:06Z","lastTransitionTime":"2026-01-22T13:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.879272 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 15:43:38.698361382 +0000 UTC Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.882854 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:45:06 crc kubenswrapper[4769]: E0122 13:45:06.883031 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.945509 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.945550 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.945566 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.945583 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.945595 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:06Z","lastTransitionTime":"2026-01-22T13:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:07 crc kubenswrapper[4769]: I0122 13:45:07.049120 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:07 crc kubenswrapper[4769]: I0122 13:45:07.049159 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:07 crc kubenswrapper[4769]: I0122 13:45:07.049171 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:07 crc kubenswrapper[4769]: I0122 13:45:07.049186 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:07 crc kubenswrapper[4769]: I0122 13:45:07.049199 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:07Z","lastTransitionTime":"2026-01-22T13:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
[... the same "Recording event message for node" events (NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID, NodeNotReady) and the matching "Node became not ready" condition repeat unchanged at ~100 ms intervals from 13:45:07.151 through 13:45:07.666 ...]
Jan 22 13:45:07 crc kubenswrapper[4769]: I0122 13:45:07.768885 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:45:07 crc kubenswrapper[4769]: I0122 13:45:07.768958 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:45:07 crc kubenswrapper[4769]: I0122 13:45:07.768970 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:45:07 crc kubenswrapper[4769]: I0122 13:45:07.768992 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:45:07 crc kubenswrapper[4769]: I0122 13:45:07.769060 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:07Z","lastTransitionTime":"2026-01-22T13:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:45:07 crc kubenswrapper[4769]: I0122 13:45:07.871281 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:45:07 crc kubenswrapper[4769]: I0122 13:45:07.871506 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:45:07 crc kubenswrapper[4769]: I0122 13:45:07.871528 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:45:07 crc kubenswrapper[4769]: I0122 13:45:07.871553 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:45:07 crc kubenswrapper[4769]: I0122 13:45:07.871570 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:07Z","lastTransitionTime":"2026-01-22T13:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:45:07 crc kubenswrapper[4769]: I0122 13:45:07.879565 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 12:54:22.06855861 +0000 UTC
Jan 22 13:45:07 crc kubenswrapper[4769]: I0122 13:45:07.882864 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 22 13:45:07 crc kubenswrapper[4769]: I0122 13:45:07.882878 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 22 13:45:07 crc kubenswrapper[4769]: E0122 13:45:07.882960 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 22 13:45:07 crc kubenswrapper[4769]: I0122 13:45:07.882864 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 22 13:45:07 crc kubenswrapper[4769]: E0122 13:45:07.883039 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 22 13:45:07 crc kubenswrapper[4769]: E0122 13:45:07.883105 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 22 13:45:07 crc kubenswrapper[4769]: I0122 13:45:07.974824 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:45:07 crc kubenswrapper[4769]: I0122 13:45:07.974869 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:45:07 crc kubenswrapper[4769]: I0122 13:45:07.974885 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:45:07 crc kubenswrapper[4769]: I0122 13:45:07.974906 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:45:07 crc kubenswrapper[4769]: I0122 13:45:07.974922 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:07Z","lastTransitionTime":"2026-01-22T13:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[... the same "Recording event message for node" events (NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID, NodeNotReady) and the matching "Node became not ready" condition repeat unchanged at ~100 ms intervals from 13:45:08.077 through 13:45:08.593 ...]
Jan 22 13:45:08 crc kubenswrapper[4769]: I0122 13:45:08.696265 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:45:08 crc kubenswrapper[4769]: I0122 13:45:08.696304 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:45:08 crc kubenswrapper[4769]: I0122 13:45:08.696312 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:45:08 crc kubenswrapper[4769]: I0122 13:45:08.696337 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:45:08 crc kubenswrapper[4769]: I0122 13:45:08.696347 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:08Z","lastTransitionTime":"2026-01-22T13:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:45:08 crc kubenswrapper[4769]: I0122 13:45:08.798989 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:45:08 crc kubenswrapper[4769]: I0122 13:45:08.799058 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:45:08 crc kubenswrapper[4769]: I0122 13:45:08.799076 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:45:08 crc kubenswrapper[4769]: I0122 13:45:08.799100 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:45:08 crc kubenswrapper[4769]: I0122 13:45:08.799119 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:08Z","lastTransitionTime":"2026-01-22T13:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:45:08 crc kubenswrapper[4769]: I0122 13:45:08.879987 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 04:00:50.451707017 +0000 UTC
Jan 22 13:45:08 crc kubenswrapper[4769]: I0122 13:45:08.883449 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49"
Jan 22 13:45:08 crc kubenswrapper[4769]: E0122 13:45:08.883740 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba"
Jan 22 13:45:08 crc kubenswrapper[4769]: I0122 13:45:08.901538 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:45:08 crc kubenswrapper[4769]: I0122 13:45:08.901633 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:45:08 crc kubenswrapper[4769]: I0122 13:45:08.901647 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:45:08 crc kubenswrapper[4769]: I0122 13:45:08.901665 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:45:08 crc kubenswrapper[4769]: I0122 13:45:08.901676 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:08Z","lastTransitionTime":"2026-01-22T13:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:45:09 crc kubenswrapper[4769]: I0122 13:45:09.004693 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:45:09 crc kubenswrapper[4769]: I0122 13:45:09.004759 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:45:09 crc kubenswrapper[4769]: I0122 13:45:09.004779 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:45:09 crc kubenswrapper[4769]: I0122 13:45:09.004847 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:45:09 crc kubenswrapper[4769]: I0122 13:45:09.004884 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:09Z","lastTransitionTime":"2026-01-22T13:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[... the same "Recording event message for node" events (NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID, NodeNotReady) and the matching "Node became not ready" condition repeat unchanged at ~100 ms intervals from 13:45:09.107 through 13:45:09.624 ...]
Jan 22 13:45:09 crc kubenswrapper[4769]: I0122 13:45:09.726736 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:45:09 crc kubenswrapper[4769]: I0122 13:45:09.726774 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:45:09 crc kubenswrapper[4769]: I0122 13:45:09.726785 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:45:09 crc kubenswrapper[4769]: I0122 13:45:09.726819 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:45:09 crc kubenswrapper[4769]: I0122 13:45:09.726833 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:09Z","lastTransitionTime":"2026-01-22T13:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:45:09 crc kubenswrapper[4769]: I0122 13:45:09.830026 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:45:09 crc kubenswrapper[4769]: I0122 13:45:09.830123 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:45:09 crc kubenswrapper[4769]: I0122 13:45:09.830141 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:45:09 crc kubenswrapper[4769]: I0122 13:45:09.830166 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:45:09 crc kubenswrapper[4769]: I0122 13:45:09.830187 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:09Z","lastTransitionTime":"2026-01-22T13:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:45:09 crc kubenswrapper[4769]: I0122 13:45:09.880947 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 13:36:06.389180386 +0000 UTC
Jan 22 13:45:09 crc kubenswrapper[4769]: I0122 13:45:09.882450 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 22 13:45:09 crc kubenswrapper[4769]: I0122 13:45:09.882460 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 22 13:45:09 crc kubenswrapper[4769]: E0122 13:45:09.882594 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 22 13:45:09 crc kubenswrapper[4769]: I0122 13:45:09.882601 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 22 13:45:09 crc kubenswrapper[4769]: E0122 13:45:09.882722 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 22 13:45:09 crc kubenswrapper[4769]: E0122 13:45:09.882762 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 22 13:45:09 crc kubenswrapper[4769]: I0122 13:45:09.933479 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:45:09 crc kubenswrapper[4769]: I0122 13:45:09.933550 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:45:09 crc kubenswrapper[4769]: I0122 13:45:09.933565 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:45:09 crc kubenswrapper[4769]: I0122 13:45:09.933593 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:45:09 crc kubenswrapper[4769]: I0122 13:45:09.933607 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:09Z","lastTransitionTime":"2026-01-22T13:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
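Note how the certificate_manager.go:356 records above keep the same expiration (2026-02-24 05:53:03 UTC) while the rotation deadline jumps around (2025-11-30, 2025-12-01, 2025-12-09, 2025-12-21). This is consistent with a deadline that is re-jittered on every evaluation. A hypothetical illustration in Python; the 70-90% band is an assumption about client-go's certificate manager rather than anything this log states, and not_before is invented since only the expiration appears here:

```python
import random
from datetime import datetime, timedelta, timezone

# Assumption: the rotation deadline is recomputed as
# not_before + lifetime * U(0.7, 0.9) each time it is evaluated.
not_after = datetime(2026, 2, 24, 5, 53, 3, tzinfo=timezone.utc)  # expiration from the log
not_before = not_after - timedelta(days=365)                      # hypothetical issuance time
lifetime = not_after - not_before
deadline = not_before + lifetime * random.uniform(0.7, 0.9)
print("rotation deadline:", deadline)  # moves on every run, expiry stays fixed
```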
[... the same "Recording event message for node" events (NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID, NodeNotReady) and the matching "Node became not ready" condition repeat unchanged at ~100 ms intervals from 13:45:10.037 through 13:45:10.865 ...]
Jan 22 13:45:10 crc kubenswrapper[4769]: I0122 13:45:10.881832 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 08:19:39.860239255 +0000 UTC
Jan 22 13:45:10 crc kubenswrapper[4769]: I0122 13:45:10.883253 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49"
Jan 22 13:45:10 crc kubenswrapper[4769]: E0122 13:45:10.883563 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba"
Jan 22 13:45:10 crc kubenswrapper[4769]: I0122 13:45:10.920744 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-x582x" podStartSLOduration=71.920720117 podStartE2EDuration="1m11.920720117s" podCreationTimestamp="2026-01-22 13:43:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:45:10.919592857 +0000 UTC m=+90.330702796" watchObservedRunningTime="2026-01-22 13:45:10.920720117 +0000 UTC m=+90.331830046"
Jan 22 13:45:10 crc kubenswrapper[4769]: I0122 13:45:10.937714 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-fclh4" podStartSLOduration=70.937694511 podStartE2EDuration="1m10.937694511s" podCreationTimestamp="2026-01-22 13:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:45:10.936387616 +0000 UTC m=+90.347497545" watchObservedRunningTime="2026-01-22 13:45:10.937694511 +0000 UTC m=+90.348804440"
Jan 22 13:45:10 crc kubenswrapper[4769]: I0122 13:45:10.970634 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pwktf" podStartSLOduration=70.970606211 podStartE2EDuration="1m10.970606211s" podCreationTimestamp="2026-01-22 13:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:45:10.953321599 +0000 UTC m=+90.364431558" watchObservedRunningTime="2026-01-22 13:45:10.970606211 +0000 UTC m=+90.381716170"
Jan 22 13:45:10 crc kubenswrapper[4769]: I0122 13:45:10.975041 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:45:10 crc kubenswrapper[4769]: I0122 13:45:10.975117 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:45:10 crc kubenswrapper[4769]: I0122 13:45:10.975138 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:45:10 crc kubenswrapper[4769]: I0122 13:45:10.975167 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:45:10 crc kubenswrapper[4769]: I0122 13:45:10.975193 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:10Z","lastTransitionTime":"2026-01-22T13:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.007690 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" podStartSLOduration=71.007668103 podStartE2EDuration="1m11.007668103s" podCreationTimestamp="2026-01-22 13:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:45:11.006560323 +0000 UTC m=+90.417670292" watchObservedRunningTime="2026-01-22 13:45:11.007668103 +0000 UTC m=+90.418778072"
Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.077068 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.077102 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.077110 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.077124 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.077133 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:11Z","lastTransitionTime":"2026-01-22T13:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.088348 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=37.088329181 podStartE2EDuration="37.088329181s" podCreationTimestamp="2026-01-22 13:44:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:45:11.087493909 +0000 UTC m=+90.498603858" watchObservedRunningTime="2026-01-22 13:45:11.088329181 +0000 UTC m=+90.499439110"
Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.111905 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=71.111889022 podStartE2EDuration="1m11.111889022s" podCreationTimestamp="2026-01-22 13:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:45:11.110934816 +0000 UTC m=+90.522044755" watchObservedRunningTime="2026-01-22 13:45:11.111889022 +0000 UTC m=+90.522998951"
Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.127832 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=73.127814597 podStartE2EDuration="1m13.127814597s" podCreationTimestamp="2026-01-22 13:43:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:45:11.126580535 +0000 UTC m=+90.537690474" watchObservedRunningTime="2026-01-22 13:45:11.127814597 +0000 UTC m=+90.538924546"
Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.178677 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.178739 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.178761 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.178783 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.178868 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:11Z","lastTransitionTime":"2026-01-22T13:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.196917 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=12.196896357 podStartE2EDuration="12.196896357s" podCreationTimestamp="2026-01-22 13:44:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:45:11.177083247 +0000 UTC m=+90.588193186" watchObservedRunningTime="2026-01-22 13:45:11.196896357 +0000 UTC m=+90.608006296"
Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.209764 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=71.20974641 podStartE2EDuration="1m11.20974641s" podCreationTimestamp="2026-01-22 13:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:45:11.197909863 +0000 UTC m=+90.609019802" watchObservedRunningTime="2026-01-22 13:45:11.20974641 +0000 UTC m=+90.620856339"
Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.252620 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podStartSLOduration=72.252599327 podStartE2EDuration="1m12.252599327s" podCreationTimestamp="2026-01-22 13:43:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:45:11.252439682 +0000 UTC m=+90.663549641" watchObservedRunningTime="2026-01-22 13:45:11.252599327 +0000 UTC m=+90.663709256"
Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.281674 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.281716 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.281726 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.281738 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.281747 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:11Z","lastTransitionTime":"2026-01-22T13:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Has your network provider started?"} Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.384439 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.384475 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.384486 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.384504 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.384518 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:11Z","lastTransitionTime":"2026-01-22T13:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.486899 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.486959 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.486990 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.487007 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.487019 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:11Z","lastTransitionTime":"2026-01-22T13:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.589344 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.589648 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.589735 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.589855 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.589959 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:11Z","lastTransitionTime":"2026-01-22T13:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.692706 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.693198 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.693301 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.693405 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.693541 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:11Z","lastTransitionTime":"2026-01-22T13:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.797588 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.797664 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.797682 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.797711 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.797730 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:11Z","lastTransitionTime":"2026-01-22T13:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.882737 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 08:12:45.712138633 +0000 UTC Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.882922 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.882964 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:45:11 crc kubenswrapper[4769]: E0122 13:45:11.883385 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.882985 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:45:11 crc kubenswrapper[4769]: E0122 13:45:11.883530 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:45:11 crc kubenswrapper[4769]: E0122 13:45:11.883648 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.901124 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.901360 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.901400 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.901461 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.901478 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:11Z","lastTransitionTime":"2026-01-22T13:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.004921 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.005010 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.005038 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.005074 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.005099 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:12Z","lastTransitionTime":"2026-01-22T13:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.108429 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.108493 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.108509 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.108535 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.108552 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:12Z","lastTransitionTime":"2026-01-22T13:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.211316 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.211433 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.211461 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.211495 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.211541 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:12Z","lastTransitionTime":"2026-01-22T13:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.313889 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.313970 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.313993 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.314023 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.314045 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:12Z","lastTransitionTime":"2026-01-22T13:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.416753 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.416810 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.416820 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.416834 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.416846 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:12Z","lastTransitionTime":"2026-01-22T13:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.520473 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.520573 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.520604 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.520630 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.520649 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:12Z","lastTransitionTime":"2026-01-22T13:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.624188 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.624287 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.624314 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.624346 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.624369 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:12Z","lastTransitionTime":"2026-01-22T13:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.726457 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.726712 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.726801 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.726885 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.726952 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:12Z","lastTransitionTime":"2026-01-22T13:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.830876 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.830955 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.830975 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.831005 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.831026 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:12Z","lastTransitionTime":"2026-01-22T13:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.883131 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.883196 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 03:52:29.663298278 +0000 UTC Jan 22 13:45:12 crc kubenswrapper[4769]: E0122 13:45:12.884336 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.933417 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.933868 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.934051 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.934199 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.934354 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:12Z","lastTransitionTime":"2026-01-22T13:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.037838 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.038132 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.038222 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.038301 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.038361 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:13Z","lastTransitionTime":"2026-01-22T13:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.141176 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.141447 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.141531 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.141626 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.141730 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:13Z","lastTransitionTime":"2026-01-22T13:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.244346 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.244406 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.244422 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.244445 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.244466 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:13Z","lastTransitionTime":"2026-01-22T13:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.348668 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.349150 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.349304 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.349446 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.349579 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:13Z","lastTransitionTime":"2026-01-22T13:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.452634 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.452741 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.452770 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.452843 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.452869 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:13Z","lastTransitionTime":"2026-01-22T13:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.556598 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.556679 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.556701 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.556734 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.556758 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:13Z","lastTransitionTime":"2026-01-22T13:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.659356 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.659412 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.659427 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.659448 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.659463 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:13Z","lastTransitionTime":"2026-01-22T13:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.762678 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.762725 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.762737 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.762754 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.762766 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:13Z","lastTransitionTime":"2026-01-22T13:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.865810 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.865851 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.865861 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.865877 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.865888 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:13Z","lastTransitionTime":"2026-01-22T13:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.882225 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.882638 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.882695 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:45:13 crc kubenswrapper[4769]: E0122 13:45:13.882829 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:45:13 crc kubenswrapper[4769]: E0122 13:45:13.883184 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.883298 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 01:48:17.929275016 +0000 UTC Jan 22 13:45:13 crc kubenswrapper[4769]: E0122 13:45:13.883669 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.884045 4769 scope.go:117] "RemoveContainer" containerID="5f2aa40f7fb64759fa8a5f718239811c0af3f0c9eee1849d5e53acc1d4c486b6" Jan 22 13:45:13 crc kubenswrapper[4769]: E0122 13:45:13.884333 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-jrg8z_openshift-ovn-kubernetes(9c028db8-99b9-422d-ba46-e1a2db06ce3c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.967841 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.967882 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.967891 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.967906 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.967915 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:13Z","lastTransitionTime":"2026-01-22T13:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:45:14 crc kubenswrapper[4769]: I0122 13:45:14.071153 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:14 crc kubenswrapper[4769]: I0122 13:45:14.071196 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:14 crc kubenswrapper[4769]: I0122 13:45:14.071211 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:14 crc kubenswrapper[4769]: I0122 13:45:14.071246 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:14 crc kubenswrapper[4769]: I0122 13:45:14.071259 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:14Z","lastTransitionTime":"2026-01-22T13:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:14 crc kubenswrapper[4769]: I0122 13:45:14.088136 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:14 crc kubenswrapper[4769]: I0122 13:45:14.088177 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:14 crc kubenswrapper[4769]: I0122 13:45:14.088189 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:14 crc kubenswrapper[4769]: I0122 13:45:14.088205 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:14 crc kubenswrapper[4769]: I0122 13:45:14.088216 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:14Z","lastTransitionTime":"2026-01-22T13:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:14 crc kubenswrapper[4769]: I0122 13:45:14.147042 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-bqn6j" podStartSLOduration=75.147016736 podStartE2EDuration="1m15.147016736s" podCreationTimestamp="2026-01-22 13:43:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:45:11.26618668 +0000 UTC m=+90.677296619" watchObservedRunningTime="2026-01-22 13:45:14.147016736 +0000 UTC m=+93.558126685" Jan 22 13:45:14 crc kubenswrapper[4769]: I0122 13:45:14.149014 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-7r9qd"] Jan 22 13:45:14 crc kubenswrapper[4769]: I0122 13:45:14.149691 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7r9qd" Jan 22 13:45:14 crc kubenswrapper[4769]: I0122 13:45:14.151423 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 22 13:45:14 crc kubenswrapper[4769]: I0122 13:45:14.151952 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 22 13:45:14 crc kubenswrapper[4769]: I0122 13:45:14.152703 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 22 13:45:14 crc kubenswrapper[4769]: I0122 13:45:14.158076 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 22 13:45:14 crc kubenswrapper[4769]: I0122 13:45:14.302711 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f5925a8-3697-41cf-8d8c-6fded7005054-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-7r9qd\" (UID: \"4f5925a8-3697-41cf-8d8c-6fded7005054\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7r9qd" Jan 22 13:45:14 crc kubenswrapper[4769]: I0122 13:45:14.302854 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4f5925a8-3697-41cf-8d8c-6fded7005054-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-7r9qd\" (UID: \"4f5925a8-3697-41cf-8d8c-6fded7005054\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7r9qd" Jan 22 13:45:14 crc kubenswrapper[4769]: I0122 13:45:14.302902 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4f5925a8-3697-41cf-8d8c-6fded7005054-service-ca\") pod \"cluster-version-operator-5c965bbfc6-7r9qd\" (UID: \"4f5925a8-3697-41cf-8d8c-6fded7005054\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7r9qd" Jan 22 13:45:14 crc kubenswrapper[4769]: I0122 13:45:14.302968 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4f5925a8-3697-41cf-8d8c-6fded7005054-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-7r9qd\" (UID: \"4f5925a8-3697-41cf-8d8c-6fded7005054\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7r9qd" Jan 22 13:45:14 crc kubenswrapper[4769]: I0122 13:45:14.303036 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4f5925a8-3697-41cf-8d8c-6fded7005054-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-7r9qd\" (UID: \"4f5925a8-3697-41cf-8d8c-6fded7005054\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7r9qd" Jan 22 13:45:14 crc kubenswrapper[4769]: I0122 13:45:14.404277 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4f5925a8-3697-41cf-8d8c-6fded7005054-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-7r9qd\" (UID: \"4f5925a8-3697-41cf-8d8c-6fded7005054\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7r9qd" Jan 22 13:45:14 crc 
Jan 22 13:45:14 crc kubenswrapper[4769]: I0122 13:45:14.404340 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4f5925a8-3697-41cf-8d8c-6fded7005054-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-7r9qd\" (UID: \"4f5925a8-3697-41cf-8d8c-6fded7005054\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7r9qd"
Jan 22 13:45:14 crc kubenswrapper[4769]: I0122 13:45:14.404394 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f5925a8-3697-41cf-8d8c-6fded7005054-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-7r9qd\" (UID: \"4f5925a8-3697-41cf-8d8c-6fded7005054\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7r9qd"
Jan 22 13:45:14 crc kubenswrapper[4769]: I0122 13:45:14.404442 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4f5925a8-3697-41cf-8d8c-6fded7005054-service-ca\") pod \"cluster-version-operator-5c965bbfc6-7r9qd\" (UID: \"4f5925a8-3697-41cf-8d8c-6fded7005054\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7r9qd"
Jan 22 13:45:14 crc kubenswrapper[4769]: I0122 13:45:14.404467 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4f5925a8-3697-41cf-8d8c-6fded7005054-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-7r9qd\" (UID: \"4f5925a8-3697-41cf-8d8c-6fded7005054\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7r9qd"
Jan 22 13:45:14 crc kubenswrapper[4769]: I0122 13:45:14.404567 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4f5925a8-3697-41cf-8d8c-6fded7005054-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-7r9qd\" (UID: \"4f5925a8-3697-41cf-8d8c-6fded7005054\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7r9qd"
Jan 22 13:45:14 crc kubenswrapper[4769]: I0122 13:45:14.404686 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4f5925a8-3697-41cf-8d8c-6fded7005054-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-7r9qd\" (UID: \"4f5925a8-3697-41cf-8d8c-6fded7005054\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7r9qd"
Jan 22 13:45:14 crc kubenswrapper[4769]: I0122 13:45:14.406262 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4f5925a8-3697-41cf-8d8c-6fded7005054-service-ca\") pod \"cluster-version-operator-5c965bbfc6-7r9qd\" (UID: \"4f5925a8-3697-41cf-8d8c-6fded7005054\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7r9qd"
Jan 22 13:45:14 crc kubenswrapper[4769]: I0122 13:45:14.416974 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f5925a8-3697-41cf-8d8c-6fded7005054-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-7r9qd\" (UID: \"4f5925a8-3697-41cf-8d8c-6fded7005054\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7r9qd"
Jan 22 13:45:14 crc kubenswrapper[4769]: I0122 13:45:14.432852 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4f5925a8-3697-41cf-8d8c-6fded7005054-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-7r9qd\" (UID: \"4f5925a8-3697-41cf-8d8c-6fded7005054\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7r9qd"
Jan 22 13:45:14 crc kubenswrapper[4769]: I0122 13:45:14.472218 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7r9qd"
Jan 22 13:45:14 crc kubenswrapper[4769]: W0122 13:45:14.498105 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f5925a8_3697_41cf_8d8c_6fded7005054.slice/crio-12e9859eb28bb4f58bbaab620a7429dff5f137685c7007865bfc5f292cabba8c WatchSource:0}: Error finding container 12e9859eb28bb4f58bbaab620a7429dff5f137685c7007865bfc5f292cabba8c: Status 404 returned error can't find the container with id 12e9859eb28bb4f58bbaab620a7429dff5f137685c7007865bfc5f292cabba8c
Jan 22 13:45:14 crc kubenswrapper[4769]: I0122 13:45:14.882513 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49"
Jan 22 13:45:14 crc kubenswrapper[4769]: E0122 13:45:14.882887 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba"
Jan 22 13:45:14 crc kubenswrapper[4769]: I0122 13:45:14.883633 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 19:08:09.28041123 +0000 UTC
Jan 22 13:45:14 crc kubenswrapper[4769]: I0122 13:45:14.883762 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates
Jan 22 13:45:14 crc kubenswrapper[4769]: I0122 13:45:14.896937 4769 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Jan 22 13:45:15 crc kubenswrapper[4769]: I0122 13:45:15.464869 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7r9qd" event={"ID":"4f5925a8-3697-41cf-8d8c-6fded7005054","Type":"ContainerStarted","Data":"eaf1b242727cf1d1d8a5c0cf11d0f575370fb51b6259f51fe5fe18e636094896"}
Jan 22 13:45:15 crc kubenswrapper[4769]: I0122 13:45:15.465287 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7r9qd" event={"ID":"4f5925a8-3697-41cf-8d8c-6fded7005054","Type":"ContainerStarted","Data":"12e9859eb28bb4f58bbaab620a7429dff5f137685c7007865bfc5f292cabba8c"}
Jan 22 13:45:15 crc kubenswrapper[4769]: I0122 13:45:15.883349 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 22 13:45:15 crc kubenswrapper[4769]: I0122 13:45:15.883391 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:45:15 crc kubenswrapper[4769]: E0122 13:45:15.883509 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:45:15 crc kubenswrapper[4769]: E0122 13:45:15.883574 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:45:15 crc kubenswrapper[4769]: E0122 13:45:15.883695 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:45:16 crc kubenswrapper[4769]: I0122 13:45:16.882283 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:45:16 crc kubenswrapper[4769]: E0122 13:45:16.882420 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:45:17 crc kubenswrapper[4769]: I0122 13:45:17.883282 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:45:17 crc kubenswrapper[4769]: I0122 13:45:17.883326 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:45:17 crc kubenswrapper[4769]: I0122 13:45:17.883305 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:45:17 crc kubenswrapper[4769]: E0122 13:45:17.883419 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:45:17 crc kubenswrapper[4769]: E0122 13:45:17.883491 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:45:17 crc kubenswrapper[4769]: E0122 13:45:17.883557 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:45:18 crc kubenswrapper[4769]: I0122 13:45:18.646210 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9764ff0b-ae92-470b-af85-7c8bb41642ba-metrics-certs\") pod \"network-metrics-daemon-cfh49\" (UID: \"9764ff0b-ae92-470b-af85-7c8bb41642ba\") " pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:45:18 crc kubenswrapper[4769]: E0122 13:45:18.646379 4769 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 13:45:18 crc kubenswrapper[4769]: E0122 13:45:18.646424 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9764ff0b-ae92-470b-af85-7c8bb41642ba-metrics-certs podName:9764ff0b-ae92-470b-af85-7c8bb41642ba nodeName:}" failed. No retries permitted until 2026-01-22 13:46:22.64641082 +0000 UTC m=+162.057520749 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9764ff0b-ae92-470b-af85-7c8bb41642ba-metrics-certs") pod "network-metrics-daemon-cfh49" (UID: "9764ff0b-ae92-470b-af85-7c8bb41642ba") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 13:45:18 crc kubenswrapper[4769]: I0122 13:45:18.883187 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:45:18 crc kubenswrapper[4769]: E0122 13:45:18.883310 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:45:19 crc kubenswrapper[4769]: I0122 13:45:19.882875 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:45:19 crc kubenswrapper[4769]: I0122 13:45:19.882928 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:45:19 crc kubenswrapper[4769]: I0122 13:45:19.882940 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:45:19 crc kubenswrapper[4769]: E0122 13:45:19.883079 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:45:19 crc kubenswrapper[4769]: E0122 13:45:19.883190 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:45:19 crc kubenswrapper[4769]: E0122 13:45:19.883397 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:45:20 crc kubenswrapper[4769]: I0122 13:45:20.883061 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:45:20 crc kubenswrapper[4769]: E0122 13:45:20.884051 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:45:21 crc kubenswrapper[4769]: I0122 13:45:21.882381 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:45:21 crc kubenswrapper[4769]: E0122 13:45:21.882509 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:45:21 crc kubenswrapper[4769]: I0122 13:45:21.882581 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:45:21 crc kubenswrapper[4769]: I0122 13:45:21.882699 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:45:21 crc kubenswrapper[4769]: E0122 13:45:21.882741 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:45:21 crc kubenswrapper[4769]: E0122 13:45:21.882997 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:45:22 crc kubenswrapper[4769]: I0122 13:45:22.882853 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:45:22 crc kubenswrapper[4769]: E0122 13:45:22.883023 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:45:23 crc kubenswrapper[4769]: I0122 13:45:23.883166 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:45:23 crc kubenswrapper[4769]: I0122 13:45:23.883201 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:45:23 crc kubenswrapper[4769]: E0122 13:45:23.883287 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:45:23 crc kubenswrapper[4769]: I0122 13:45:23.883166 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:45:23 crc kubenswrapper[4769]: E0122 13:45:23.883380 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:45:23 crc kubenswrapper[4769]: E0122 13:45:23.883442 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:45:24 crc kubenswrapper[4769]: I0122 13:45:24.882908 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:45:24 crc kubenswrapper[4769]: E0122 13:45:24.883041 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:45:25 crc kubenswrapper[4769]: I0122 13:45:25.883185 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:45:25 crc kubenswrapper[4769]: I0122 13:45:25.883261 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:45:25 crc kubenswrapper[4769]: E0122 13:45:25.883537 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:45:25 crc kubenswrapper[4769]: E0122 13:45:25.883670 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:45:25 crc kubenswrapper[4769]: I0122 13:45:25.883211 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:45:25 crc kubenswrapper[4769]: E0122 13:45:25.883984 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:45:26 crc kubenswrapper[4769]: I0122 13:45:26.882619 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:45:26 crc kubenswrapper[4769]: E0122 13:45:26.883126 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:45:27 crc kubenswrapper[4769]: I0122 13:45:27.882581 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:45:27 crc kubenswrapper[4769]: I0122 13:45:27.882672 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:45:27 crc kubenswrapper[4769]: E0122 13:45:27.882736 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:45:27 crc kubenswrapper[4769]: E0122 13:45:27.883124 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:45:27 crc kubenswrapper[4769]: I0122 13:45:27.883859 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:45:27 crc kubenswrapper[4769]: E0122 13:45:27.883988 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:45:28 crc kubenswrapper[4769]: I0122 13:45:28.882343 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:45:28 crc kubenswrapper[4769]: E0122 13:45:28.882834 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:45:28 crc kubenswrapper[4769]: I0122 13:45:28.882963 4769 scope.go:117] "RemoveContainer" containerID="5f2aa40f7fb64759fa8a5f718239811c0af3f0c9eee1849d5e53acc1d4c486b6" Jan 22 13:45:28 crc kubenswrapper[4769]: E0122 13:45:28.883719 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-jrg8z_openshift-ovn-kubernetes(9c028db8-99b9-422d-ba46-e1a2db06ce3c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" Jan 22 13:45:29 crc kubenswrapper[4769]: I0122 13:45:29.882606 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:45:29 crc kubenswrapper[4769]: I0122 13:45:29.882672 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:45:29 crc kubenswrapper[4769]: E0122 13:45:29.882722 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:45:29 crc kubenswrapper[4769]: I0122 13:45:29.882823 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:45:29 crc kubenswrapper[4769]: E0122 13:45:29.882963 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:45:29 crc kubenswrapper[4769]: E0122 13:45:29.883030 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:45:30 crc kubenswrapper[4769]: I0122 13:45:30.884076 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:45:30 crc kubenswrapper[4769]: E0122 13:45:30.884910 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:45:31 crc kubenswrapper[4769]: I0122 13:45:31.882765 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:45:31 crc kubenswrapper[4769]: I0122 13:45:31.882820 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:45:31 crc kubenswrapper[4769]: I0122 13:45:31.882848 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:45:31 crc kubenswrapper[4769]: E0122 13:45:31.882930 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:45:31 crc kubenswrapper[4769]: E0122 13:45:31.883055 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:45:31 crc kubenswrapper[4769]: E0122 13:45:31.883189 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:45:32 crc kubenswrapper[4769]: I0122 13:45:32.885275 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:45:32 crc kubenswrapper[4769]: E0122 13:45:32.886083 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:45:33 crc kubenswrapper[4769]: I0122 13:45:33.523960 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fclh4_d4186e93-df8a-49d3-9068-c8b8acd05baa/kube-multus/1.log" Jan 22 13:45:33 crc kubenswrapper[4769]: I0122 13:45:33.524705 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fclh4_d4186e93-df8a-49d3-9068-c8b8acd05baa/kube-multus/0.log" Jan 22 13:45:33 crc kubenswrapper[4769]: I0122 13:45:33.524780 4769 generic.go:334] "Generic (PLEG): container finished" podID="d4186e93-df8a-49d3-9068-c8b8acd05baa" containerID="ffa3ce92a87f448f60b39283929d77139230e6bb0052cdeb6303e0f6b13997d8" exitCode=1 Jan 22 13:45:33 crc kubenswrapper[4769]: I0122 13:45:33.524874 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fclh4" event={"ID":"d4186e93-df8a-49d3-9068-c8b8acd05baa","Type":"ContainerDied","Data":"ffa3ce92a87f448f60b39283929d77139230e6bb0052cdeb6303e0f6b13997d8"} Jan 22 13:45:33 crc kubenswrapper[4769]: I0122 13:45:33.524929 4769 scope.go:117] "RemoveContainer" containerID="f4e835bfb6d47d3628a5f67cb226a00d51c3eebec57de5db55a54406bf1e6122" Jan 22 13:45:33 crc kubenswrapper[4769]: I0122 13:45:33.525711 4769 scope.go:117] "RemoveContainer" containerID="ffa3ce92a87f448f60b39283929d77139230e6bb0052cdeb6303e0f6b13997d8" Jan 22 13:45:33 crc kubenswrapper[4769]: E0122 13:45:33.526055 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-fclh4_openshift-multus(d4186e93-df8a-49d3-9068-c8b8acd05baa)\"" pod="openshift-multus/multus-fclh4" podUID="d4186e93-df8a-49d3-9068-c8b8acd05baa" Jan 22 13:45:33 crc kubenswrapper[4769]: I0122 13:45:33.554685 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7r9qd" podStartSLOduration=93.554666563 podStartE2EDuration="1m33.554666563s" podCreationTimestamp="2026-01-22 13:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:45:15.487600317 +0000 UTC m=+94.898710256" watchObservedRunningTime="2026-01-22 13:45:33.554666563 +0000 UTC m=+112.965776512" Jan 22 13:45:33 crc kubenswrapper[4769]: I0122 13:45:33.882698 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:45:33 crc kubenswrapper[4769]: E0122 13:45:33.882863 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:45:33 crc kubenswrapper[4769]: I0122 13:45:33.882726 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:45:33 crc kubenswrapper[4769]: E0122 13:45:33.882929 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:45:33 crc kubenswrapper[4769]: I0122 13:45:33.882703 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:45:33 crc kubenswrapper[4769]: E0122 13:45:33.882984 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:45:34 crc kubenswrapper[4769]: I0122 13:45:34.528953 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fclh4_d4186e93-df8a-49d3-9068-c8b8acd05baa/kube-multus/1.log" Jan 22 13:45:34 crc kubenswrapper[4769]: I0122 13:45:34.882933 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:45:34 crc kubenswrapper[4769]: E0122 13:45:34.883211 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:45:35 crc kubenswrapper[4769]: I0122 13:45:35.883087 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:45:35 crc kubenswrapper[4769]: I0122 13:45:35.883198 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:45:35 crc kubenswrapper[4769]: I0122 13:45:35.883116 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:45:35 crc kubenswrapper[4769]: E0122 13:45:35.883253 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:45:35 crc kubenswrapper[4769]: E0122 13:45:35.883416 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:45:35 crc kubenswrapper[4769]: E0122 13:45:35.883519 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:45:36 crc kubenswrapper[4769]: I0122 13:45:36.882426 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:45:36 crc kubenswrapper[4769]: E0122 13:45:36.882581 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:45:37 crc kubenswrapper[4769]: I0122 13:45:37.882757 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:45:37 crc kubenswrapper[4769]: E0122 13:45:37.883611 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:45:37 crc kubenswrapper[4769]: I0122 13:45:37.883969 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:45:37 crc kubenswrapper[4769]: E0122 13:45:37.884207 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:45:37 crc kubenswrapper[4769]: I0122 13:45:37.884483 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:45:37 crc kubenswrapper[4769]: E0122 13:45:37.884643 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:45:38 crc kubenswrapper[4769]: I0122 13:45:38.883140 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:45:38 crc kubenswrapper[4769]: E0122 13:45:38.883332 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:45:39 crc kubenswrapper[4769]: I0122 13:45:39.882379 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:45:39 crc kubenswrapper[4769]: I0122 13:45:39.882434 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:45:39 crc kubenswrapper[4769]: E0122 13:45:39.882512 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:45:39 crc kubenswrapper[4769]: E0122 13:45:39.882604 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:45:39 crc kubenswrapper[4769]: I0122 13:45:39.882973 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:45:39 crc kubenswrapper[4769]: E0122 13:45:39.883186 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:45:40 crc kubenswrapper[4769]: I0122 13:45:40.882740 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:45:40 crc kubenswrapper[4769]: E0122 13:45:40.884431 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:45:40 crc kubenswrapper[4769]: E0122 13:45:40.909370 4769 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 22 13:45:41 crc kubenswrapper[4769]: E0122 13:45:41.007933 4769 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 22 13:45:41 crc kubenswrapper[4769]: I0122 13:45:41.882519 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:45:41 crc kubenswrapper[4769]: I0122 13:45:41.882514 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:45:41 crc kubenswrapper[4769]: E0122 13:45:41.882667 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:45:41 crc kubenswrapper[4769]: E0122 13:45:41.882729 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:45:41 crc kubenswrapper[4769]: I0122 13:45:41.882543 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:45:41 crc kubenswrapper[4769]: E0122 13:45:41.882981 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:45:41 crc kubenswrapper[4769]: I0122 13:45:41.884937 4769 scope.go:117] "RemoveContainer" containerID="5f2aa40f7fb64759fa8a5f718239811c0af3f0c9eee1849d5e53acc1d4c486b6" Jan 22 13:45:42 crc kubenswrapper[4769]: I0122 13:45:42.558395 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jrg8z_9c028db8-99b9-422d-ba46-e1a2db06ce3c/ovnkube-controller/3.log" Jan 22 13:45:42 crc kubenswrapper[4769]: I0122 13:45:42.561716 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" event={"ID":"9c028db8-99b9-422d-ba46-e1a2db06ce3c","Type":"ContainerStarted","Data":"d764d72b65db80595dfba72a7b23c9291cb7dd526ced59c756d4b53cc30aaa5b"} Jan 22 13:45:42 crc kubenswrapper[4769]: I0122 13:45:42.562853 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:45:42 crc kubenswrapper[4769]: I0122 13:45:42.590866 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" podStartSLOduration=102.590848122 podStartE2EDuration="1m42.590848122s" podCreationTimestamp="2026-01-22 13:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:45:42.588408597 +0000 UTC m=+121.999518536" watchObservedRunningTime="2026-01-22 13:45:42.590848122 +0000 UTC m=+122.001958051" Jan 22 13:45:42 crc kubenswrapper[4769]: I0122 13:45:42.883138 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:45:42 crc kubenswrapper[4769]: E0122 13:45:42.883342 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:45:42 crc kubenswrapper[4769]: I0122 13:45:42.968941 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-cfh49"] Jan 22 13:45:43 crc kubenswrapper[4769]: I0122 13:45:43.565526 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:45:43 crc kubenswrapper[4769]: E0122 13:45:43.565637 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:45:43 crc kubenswrapper[4769]: I0122 13:45:43.886352 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:45:43 crc kubenswrapper[4769]: I0122 13:45:43.886467 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:45:43 crc kubenswrapper[4769]: I0122 13:45:43.886509 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:45:43 crc kubenswrapper[4769]: E0122 13:45:43.886675 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:45:43 crc kubenswrapper[4769]: E0122 13:45:43.886809 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:45:43 crc kubenswrapper[4769]: E0122 13:45:43.886880 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:45:44 crc kubenswrapper[4769]: I0122 13:45:44.883329 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:45:44 crc kubenswrapper[4769]: E0122 13:45:44.883469 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:45:45 crc kubenswrapper[4769]: I0122 13:45:45.882962 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:45:45 crc kubenswrapper[4769]: I0122 13:45:45.883020 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:45:45 crc kubenswrapper[4769]: I0122 13:45:45.882974 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:45:45 crc kubenswrapper[4769]: E0122 13:45:45.883118 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:45:45 crc kubenswrapper[4769]: E0122 13:45:45.883238 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:45:45 crc kubenswrapper[4769]: E0122 13:45:45.883379 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:45:46 crc kubenswrapper[4769]: E0122 13:45:46.009560 4769 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 22 13:45:46 crc kubenswrapper[4769]: I0122 13:45:46.883485 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:45:46 crc kubenswrapper[4769]: E0122 13:45:46.883895 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:45:46 crc kubenswrapper[4769]: I0122 13:45:46.884142 4769 scope.go:117] "RemoveContainer" containerID="ffa3ce92a87f448f60b39283929d77139230e6bb0052cdeb6303e0f6b13997d8" Jan 22 13:45:47 crc kubenswrapper[4769]: I0122 13:45:47.578805 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fclh4_d4186e93-df8a-49d3-9068-c8b8acd05baa/kube-multus/1.log" Jan 22 13:45:47 crc kubenswrapper[4769]: I0122 13:45:47.579189 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fclh4" event={"ID":"d4186e93-df8a-49d3-9068-c8b8acd05baa","Type":"ContainerStarted","Data":"8b525990498eb9a71e43d42c3191a2ad5043bcf24c857f8db1dc71b1a487d0c3"} Jan 22 13:45:47 crc kubenswrapper[4769]: I0122 13:45:47.882847 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:45:47 crc kubenswrapper[4769]: I0122 13:45:47.882932 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:45:47 crc kubenswrapper[4769]: E0122 13:45:47.882970 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:45:47 crc kubenswrapper[4769]: I0122 13:45:47.882936 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:45:47 crc kubenswrapper[4769]: E0122 13:45:47.883073 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:45:47 crc kubenswrapper[4769]: E0122 13:45:47.883206 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:45:48 crc kubenswrapper[4769]: I0122 13:45:48.882506 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:45:48 crc kubenswrapper[4769]: E0122 13:45:48.882670 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:45:49 crc kubenswrapper[4769]: I0122 13:45:49.883162 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:45:49 crc kubenswrapper[4769]: I0122 13:45:49.883193 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:45:49 crc kubenswrapper[4769]: I0122 13:45:49.883224 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:45:49 crc kubenswrapper[4769]: E0122 13:45:49.883278 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:45:49 crc kubenswrapper[4769]: E0122 13:45:49.883410 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:45:49 crc kubenswrapper[4769]: E0122 13:45:49.883598 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:45:50 crc kubenswrapper[4769]: I0122 13:45:50.882575 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:45:50 crc kubenswrapper[4769]: E0122 13:45:50.883608 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:45:51 crc kubenswrapper[4769]: I0122 13:45:51.882755 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:45:51 crc kubenswrapper[4769]: I0122 13:45:51.882745 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:45:51 crc kubenswrapper[4769]: I0122 13:45:51.882913 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:45:51 crc kubenswrapper[4769]: I0122 13:45:51.885194 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 22 13:45:51 crc kubenswrapper[4769]: I0122 13:45:51.885231 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 22 13:45:51 crc kubenswrapper[4769]: I0122 13:45:51.885436 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 22 13:45:51 crc kubenswrapper[4769]: I0122 13:45:51.885496 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 22 13:45:52 crc kubenswrapper[4769]: I0122 13:45:52.882950 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:45:52 crc kubenswrapper[4769]: I0122 13:45:52.886295 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 22 13:45:52 crc kubenswrapper[4769]: I0122 13:45:52.886978 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.152614 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.205601 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-dltl2"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.206435 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-2s5j2"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.207232 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2s5j2" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.207909 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-dltl2" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.211254 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.211295 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-65brj"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.211714 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.212550 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-65brj" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.213924 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.214984 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.224095 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.224461 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.225203 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.225486 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.225345 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.225938 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.227340 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.240813 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.241313 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.241703 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.241971 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.242186 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.242496 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.242669 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s9v5x"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.243377 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-k5psf"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.243873 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-k5psf" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.243948 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s9v5x" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.243975 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.245480 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-jtzpg"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.245902 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.246249 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-v24vn"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.247373 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-v24vn" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.249857 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-jjt2k"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.250669 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.250989 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-jjt2k" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.259868 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-t5985"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.260508 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dbzkw"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.260893 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5985" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.260919 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dbzkw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.261857 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-nwrtw"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.262286 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.262475 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-nwrtw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.262846 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qp45"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.263344 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.264008 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.264129 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.264129 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qp45" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.264711 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.264966 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xmh8s"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.265466 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xmh8s" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.265745 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-2vm4g"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.265045 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.266156 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.266239 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-2vm4g" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.266419 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.266683 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.267092 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.267186 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.265916 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.269723 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.270077 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.270416 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.270744 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.271311 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.271580 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.272044 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-mgft7"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.272287 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.272532 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.272940 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.273064 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.273324 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.273418 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.273585 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.273735 
Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.274004 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.274160 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.274268 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.274362 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.274464 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.274564 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.273752 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.276077 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.275077 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.275175 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.275166 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.276276 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-mgft7"
Need to start a new one" pod="openshift-console/downloads-7954f5f757-mgft7" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.275359 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.275429 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.275473 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.275629 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.275667 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.276776 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-bkbvd"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.304974 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2s8ds"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.277343 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.280636 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.280780 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.280860 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.280977 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.281009 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.281203 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.281245 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.281302 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.281346 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.281357 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.281395 4769 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console"/"console-config" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.281392 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.281436 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.281435 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.281474 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.281510 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.281511 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.299411 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.299487 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.302478 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.303015 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.319316 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.321527 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.321669 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.322482 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.322698 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-bkbvd" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.323361 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.325168 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.325336 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.325500 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.326730 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-9z2dj"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.327218 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2b0fa7ff-24c4-431c-bc35-87f9483d5c70-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-k5psf\" (UID: \"2b0fa7ff-24c4-431c-bc35-87f9483d5c70\") " pod="openshift-controller-manager/controller-manager-879f6c89f-k5psf" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.327312 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrbwk\" (UniqueName: \"kubernetes.io/projected/e14c6636-281b-40e1-9ee8-1a08812104fd-kube-api-access-zrbwk\") pod \"oauth-openshift-558db77b4-jtzpg\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.327391 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/81a5be64-af9a-4376-9105-c36371ad5069-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-t5985\" (UID: \"81a5be64-af9a-4376-9105-c36371ad5069\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5985" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.327493 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/15723c66-27d3-4cea-9962-e75bbe7bb967-trusted-ca-bundle\") pod \"apiserver-76f77b778f-jjt2k\" (UID: \"15723c66-27d3-4cea-9962-e75bbe7bb967\") " pod="openshift-apiserver/apiserver-76f77b778f-jjt2k" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.327581 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbtbb\" (UniqueName: \"kubernetes.io/projected/40076fe2-006c-4dc7-ac7c-71fa27c9bb7d-kube-api-access-vbtbb\") pod \"openshift-config-operator-7777fb866f-v24vn\" (UID: \"40076fe2-006c-4dc7-ac7c-71fa27c9bb7d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-v24vn" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.327648 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9fa4c168-21ea-4f79-a600-7f3c8f656bd0-console-oauth-config\") pod \"console-f9d7485db-nwrtw\" (UID: 
\"9fa4c168-21ea-4f79-a600-7f3c8f656bd0\") " pod="openshift-console/console-f9d7485db-nwrtw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.327714 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88755d81-da75-40b3-97c4-224eaad0eca2-config\") pod \"route-controller-manager-6576b87f9c-8qp45\" (UID: \"88755d81-da75-40b3-97c4-224eaad0eca2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qp45" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.328087 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/88755d81-da75-40b3-97c4-224eaad0eca2-client-ca\") pod \"route-controller-manager-6576b87f9c-8qp45\" (UID: \"88755d81-da75-40b3-97c4-224eaad0eca2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qp45" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.328168 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-jtzpg\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.328247 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/15723c66-27d3-4cea-9962-e75bbe7bb967-node-pullsecrets\") pod \"apiserver-76f77b778f-jjt2k\" (UID: \"15723c66-27d3-4cea-9962-e75bbe7bb967\") " pod="openshift-apiserver/apiserver-76f77b778f-jjt2k" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.328359 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4e58a9e-ecc8-43de-9518-0b014b2a27d2-config\") pod \"machine-api-operator-5694c8668f-65brj\" (UID: \"f4e58a9e-ecc8-43de-9518-0b014b2a27d2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-65brj" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.328445 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-jtzpg\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.328530 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5758b1f6-5135-428d-ad0b-6892a49d1800-config\") pod \"console-operator-58897d9998-2vm4g\" (UID: \"5758b1f6-5135-428d-ad0b-6892a49d1800\") " pod="openshift-console-operator/console-operator-58897d9998-2vm4g" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.328622 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8c1e55ad-d8f0-4ceb-b929-e4f09903df58-auth-proxy-config\") pod \"machine-approver-56656f9798-2s5j2\" (UID: 
\"8c1e55ad-d8f0-4ceb-b929-e4f09903df58\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2s5j2" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.328700 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-jtzpg\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.328763 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81a5be64-af9a-4376-9105-c36371ad5069-serving-cert\") pod \"apiserver-7bbb656c7d-t5985\" (UID: \"81a5be64-af9a-4376-9105-c36371ad5069\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5985" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.328855 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wt8zc\" (UniqueName: \"kubernetes.io/projected/9fa4c168-21ea-4f79-a600-7f3c8f656bd0-kube-api-access-wt8zc\") pod \"console-f9d7485db-nwrtw\" (UID: \"9fa4c168-21ea-4f79-a600-7f3c8f656bd0\") " pod="openshift-console/console-f9d7485db-nwrtw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.332932 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b0fa7ff-24c4-431c-bc35-87f9483d5c70-config\") pod \"controller-manager-879f6c89f-k5psf\" (UID: \"2b0fa7ff-24c4-431c-bc35-87f9483d5c70\") " pod="openshift-controller-manager/controller-manager-879f6c89f-k5psf" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.333073 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/81a5be64-af9a-4376-9105-c36371ad5069-encryption-config\") pod \"apiserver-7bbb656c7d-t5985\" (UID: \"81a5be64-af9a-4376-9105-c36371ad5069\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5985" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.333164 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e14c6636-281b-40e1-9ee8-1a08812104fd-audit-dir\") pod \"oauth-openshift-558db77b4-jtzpg\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.333268 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40076fe2-006c-4dc7-ac7c-71fa27c9bb7d-serving-cert\") pod \"openshift-config-operator-7777fb866f-v24vn\" (UID: \"40076fe2-006c-4dc7-ac7c-71fa27c9bb7d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-v24vn" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.333342 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/43448f45-644f-4b5a-aa06-567b5c8f8279-etcd-ca\") pod \"etcd-operator-b45778765-bkbvd\" (UID: \"43448f45-644f-4b5a-aa06-567b5c8f8279\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bkbvd" Jan 22 13:45:55 crc 
Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.333414 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/43448f45-644f-4b5a-aa06-567b5c8f8279-etcd-service-ca\") pod \"etcd-operator-b45778765-bkbvd\" (UID: \"43448f45-644f-4b5a-aa06-567b5c8f8279\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bkbvd"
Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.333483 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-jtzpg\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg"
Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.333570 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9fa4c168-21ea-4f79-a600-7f3c8f656bd0-oauth-serving-cert\") pod \"console-f9d7485db-nwrtw\" (UID: \"9fa4c168-21ea-4f79-a600-7f3c8f656bd0\") " pod="openshift-console/console-f9d7485db-nwrtw"
Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.333645 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/81a5be64-af9a-4376-9105-c36371ad5069-audit-dir\") pod \"apiserver-7bbb656c7d-t5985\" (UID: \"81a5be64-af9a-4376-9105-c36371ad5069\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5985"
Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.333722 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43448f45-644f-4b5a-aa06-567b5c8f8279-config\") pod \"etcd-operator-b45778765-bkbvd\" (UID: \"43448f45-644f-4b5a-aa06-567b5c8f8279\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bkbvd"
Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.333848 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-jtzpg\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg"
Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.333939 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/81a5be64-af9a-4376-9105-c36371ad5069-etcd-client\") pod \"apiserver-7bbb656c7d-t5985\" (UID: \"81a5be64-af9a-4376-9105-c36371ad5069\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5985"
Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.334012 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81a5be64-af9a-4376-9105-c36371ad5069-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-t5985\" (UID: \"81a5be64-af9a-4376-9105-c36371ad5069\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5985"
Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.329140 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-dltl2"]
pods=["openshift-authentication-operator/authentication-operator-69f744f599-dltl2"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.334163 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-65brj"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.334197 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-jtzpg"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.334212 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-k5psf"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.334227 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-9mm5p"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.329203 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9z2dj" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.327682 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2s8ds" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.334084 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-jtzpg\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.335054 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-jtzpg\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.335013 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-9mm5p" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.335130 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/52f284ae-bace-4bd8-8140-7f37fbad55d4-service-ca-bundle\") pod \"authentication-operator-69f744f599-dltl2\" (UID: \"52f284ae-bace-4bd8-8140-7f37fbad55d4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dltl2" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.335464 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5758b1f6-5135-428d-ad0b-6892a49d1800-serving-cert\") pod \"console-operator-58897d9998-2vm4g\" (UID: \"5758b1f6-5135-428d-ad0b-6892a49d1800\") " pod="openshift-console-operator/console-operator-58897d9998-2vm4g" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.335541 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4c2lv\" (UniqueName: \"kubernetes.io/projected/92eb7fb7-d1b8-45ad-b8ff-8411d04eb048-kube-api-access-4c2lv\") pod \"downloads-7954f5f757-mgft7\" (UID: \"92eb7fb7-d1b8-45ad-b8ff-8411d04eb048\") " pod="openshift-console/downloads-7954f5f757-mgft7" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.335622 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88755d81-da75-40b3-97c4-224eaad0eca2-serving-cert\") pod \"route-controller-manager-6576b87f9c-8qp45\" (UID: \"88755d81-da75-40b3-97c4-224eaad0eca2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qp45" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.335693 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-jtzpg\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.335761 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85zdt\" (UniqueName: \"kubernetes.io/projected/81a5be64-af9a-4376-9105-c36371ad5069-kube-api-access-85zdt\") pod \"apiserver-7bbb656c7d-t5985\" (UID: \"81a5be64-af9a-4376-9105-c36371ad5069\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5985" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.335842 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nb62s\" (UniqueName: \"kubernetes.io/projected/15723c66-27d3-4cea-9962-e75bbe7bb967-kube-api-access-nb62s\") pod \"apiserver-76f77b778f-jjt2k\" (UID: \"15723c66-27d3-4cea-9962-e75bbe7bb967\") " pod="openshift-apiserver/apiserver-76f77b778f-jjt2k" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.335912 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/81a5be64-af9a-4376-9105-c36371ad5069-audit-policies\") pod \"apiserver-7bbb656c7d-t5985\" (UID: \"81a5be64-af9a-4376-9105-c36371ad5069\") 
" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5985" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.335986 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15723c66-27d3-4cea-9962-e75bbe7bb967-config\") pod \"apiserver-76f77b778f-jjt2k\" (UID: \"15723c66-27d3-4cea-9962-e75bbe7bb967\") " pod="openshift-apiserver/apiserver-76f77b778f-jjt2k" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.336091 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/15723c66-27d3-4cea-9962-e75bbe7bb967-image-import-ca\") pod \"apiserver-76f77b778f-jjt2k\" (UID: \"15723c66-27d3-4cea-9962-e75bbe7bb967\") " pod="openshift-apiserver/apiserver-76f77b778f-jjt2k" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.336169 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6d9p\" (UniqueName: \"kubernetes.io/projected/52f284ae-bace-4bd8-8140-7f37fbad55d4-kube-api-access-r6d9p\") pod \"authentication-operator-69f744f599-dltl2\" (UID: \"52f284ae-bace-4bd8-8140-7f37fbad55d4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dltl2" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.336266 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a6d7f1cf-d68c-4658-98b2-e18d8e70edb8-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-s9v5x\" (UID: \"a6d7f1cf-d68c-4658-98b2-e18d8e70edb8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s9v5x" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.336362 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxfjc\" (UniqueName: \"kubernetes.io/projected/88755d81-da75-40b3-97c4-224eaad0eca2-kube-api-access-qxfjc\") pod \"route-controller-manager-6576b87f9c-8qp45\" (UID: \"88755d81-da75-40b3-97c4-224eaad0eca2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qp45" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.336434 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6ckx\" (UniqueName: \"kubernetes.io/projected/8c1e55ad-d8f0-4ceb-b929-e4f09903df58-kube-api-access-m6ckx\") pod \"machine-approver-56656f9798-2s5j2\" (UID: \"8c1e55ad-d8f0-4ceb-b929-e4f09903df58\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2s5j2" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.336515 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9fa4c168-21ea-4f79-a600-7f3c8f656bd0-service-ca\") pod \"console-f9d7485db-nwrtw\" (UID: \"9fa4c168-21ea-4f79-a600-7f3c8f656bd0\") " pod="openshift-console/console-f9d7485db-nwrtw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.336598 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c1e55ad-d8f0-4ceb-b929-e4f09903df58-config\") pod \"machine-approver-56656f9798-2s5j2\" (UID: \"8c1e55ad-d8f0-4ceb-b929-e4f09903df58\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2s5j2" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.336692 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-jtzpg\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.336802 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjwfr\" (UniqueName: \"kubernetes.io/projected/a6d7f1cf-d68c-4658-98b2-e18d8e70edb8-kube-api-access-qjwfr\") pod \"cluster-samples-operator-665b6dd947-s9v5x\" (UID: \"a6d7f1cf-d68c-4658-98b2-e18d8e70edb8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s9v5x" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.336895 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/15723c66-27d3-4cea-9962-e75bbe7bb967-audit\") pod \"apiserver-76f77b778f-jjt2k\" (UID: \"15723c66-27d3-4cea-9962-e75bbe7bb967\") " pod="openshift-apiserver/apiserver-76f77b778f-jjt2k" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.337018 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/15723c66-27d3-4cea-9962-e75bbe7bb967-etcd-client\") pod \"apiserver-76f77b778f-jjt2k\" (UID: \"15723c66-27d3-4cea-9962-e75bbe7bb967\") " pod="openshift-apiserver/apiserver-76f77b778f-jjt2k" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.337153 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/15723c66-27d3-4cea-9962-e75bbe7bb967-etcd-serving-ca\") pod \"apiserver-76f77b778f-jjt2k\" (UID: \"15723c66-27d3-4cea-9962-e75bbe7bb967\") " pod="openshift-apiserver/apiserver-76f77b778f-jjt2k" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.337250 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9fa4c168-21ea-4f79-a600-7f3c8f656bd0-console-serving-cert\") pod \"console-f9d7485db-nwrtw\" (UID: \"9fa4c168-21ea-4f79-a600-7f3c8f656bd0\") " pod="openshift-console/console-f9d7485db-nwrtw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.337335 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/15723c66-27d3-4cea-9962-e75bbe7bb967-encryption-config\") pod \"apiserver-76f77b778f-jjt2k\" (UID: \"15723c66-27d3-4cea-9962-e75bbe7bb967\") " pod="openshift-apiserver/apiserver-76f77b778f-jjt2k" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.337402 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/40076fe2-006c-4dc7-ac7c-71fa27c9bb7d-available-featuregates\") pod \"openshift-config-operator-7777fb866f-v24vn\" (UID: \"40076fe2-006c-4dc7-ac7c-71fa27c9bb7d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-v24vn" Jan 22 13:45:55 crc 
Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.337465 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9fa4c168-21ea-4f79-a600-7f3c8f656bd0-trusted-ca-bundle\") pod \"console-f9d7485db-nwrtw\" (UID: \"9fa4c168-21ea-4f79-a600-7f3c8f656bd0\") " pod="openshift-console/console-f9d7485db-nwrtw"
Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.337528 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2b0fa7ff-24c4-431c-bc35-87f9483d5c70-client-ca\") pod \"controller-manager-879f6c89f-k5psf\" (UID: \"2b0fa7ff-24c4-431c-bc35-87f9483d5c70\") " pod="openshift-controller-manager/controller-manager-879f6c89f-k5psf"
Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.337601 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ce7607b6-0e74-47ba-8875-057821862224-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-xmh8s\" (UID: \"ce7607b6-0e74-47ba-8875-057821862224\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xmh8s"
Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.337676 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sln4g\" (UniqueName: \"kubernetes.io/projected/2b0fa7ff-24c4-431c-bc35-87f9483d5c70-kube-api-access-sln4g\") pod \"controller-manager-879f6c89f-k5psf\" (UID: \"2b0fa7ff-24c4-431c-bc35-87f9483d5c70\") " pod="openshift-controller-manager/controller-manager-879f6c89f-k5psf"
Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.337742 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52f284ae-bace-4bd8-8140-7f37fbad55d4-serving-cert\") pod \"authentication-operator-69f744f599-dltl2\" (UID: \"52f284ae-bace-4bd8-8140-7f37fbad55d4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dltl2"
Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.337825 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43448f45-644f-4b5a-aa06-567b5c8f8279-serving-cert\") pod \"etcd-operator-b45778765-bkbvd\" (UID: \"43448f45-644f-4b5a-aa06-567b5c8f8279\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bkbvd"
Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.337895 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/8c1e55ad-d8f0-4ceb-b929-e4f09903df58-machine-approver-tls\") pod \"machine-approver-56656f9798-2s5j2\" (UID: \"8c1e55ad-d8f0-4ceb-b929-e4f09903df58\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2s5j2"
Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.338017 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kz27q\" (UniqueName: \"kubernetes.io/projected/c1a96247-d002-4f96-9695-16a4011f3ad5-kube-api-access-kz27q\") pod \"openshift-apiserver-operator-796bbdcf4f-dbzkw\" (UID: \"c1a96247-d002-4f96-9695-16a4011f3ad5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dbzkw"
Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.338169 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9fa4c168-21ea-4f79-a600-7f3c8f656bd0-console-config\") pod \"console-f9d7485db-nwrtw\" (UID: \"9fa4c168-21ea-4f79-a600-7f3c8f656bd0\") " pod="openshift-console/console-f9d7485db-nwrtw"
Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.338304 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6kks\" (UniqueName: \"kubernetes.io/projected/ce7607b6-0e74-47ba-8875-057821862224-kube-api-access-r6kks\") pod \"cluster-image-registry-operator-dc59b4c8b-xmh8s\" (UID: \"ce7607b6-0e74-47ba-8875-057821862224\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xmh8s"
Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.338396 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/ce7607b6-0e74-47ba-8875-057821862224-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-xmh8s\" (UID: \"ce7607b6-0e74-47ba-8875-057821862224\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xmh8s"
Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.338495 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-jtzpg\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg"
Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.338604 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1a96247-d002-4f96-9695-16a4011f3ad5-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-dbzkw\" (UID: \"c1a96247-d002-4f96-9695-16a4011f3ad5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dbzkw"
Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.339066 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/f4e58a9e-ecc8-43de-9518-0b014b2a27d2-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-65brj\" (UID: \"f4e58a9e-ecc8-43de-9518-0b014b2a27d2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-65brj"
Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.339163 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e14c6636-281b-40e1-9ee8-1a08812104fd-audit-policies\") pod \"oauth-openshift-558db77b4-jtzpg\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg"
Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.339278 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-jtzpg\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg"
pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.339415 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15723c66-27d3-4cea-9962-e75bbe7bb967-serving-cert\") pod \"apiserver-76f77b778f-jjt2k\" (UID: \"15723c66-27d3-4cea-9962-e75bbe7bb967\") " pod="openshift-apiserver/apiserver-76f77b778f-jjt2k" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.339523 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/52f284ae-bace-4bd8-8140-7f37fbad55d4-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-dltl2\" (UID: \"52f284ae-bace-4bd8-8140-7f37fbad55d4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dltl2" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.339643 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwl46\" (UniqueName: \"kubernetes.io/projected/5758b1f6-5135-428d-ad0b-6892a49d1800-kube-api-access-wwl46\") pod \"console-operator-58897d9998-2vm4g\" (UID: \"5758b1f6-5135-428d-ad0b-6892a49d1800\") " pod="openshift-console-operator/console-operator-58897d9998-2vm4g" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.339775 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8n48\" (UniqueName: \"kubernetes.io/projected/43448f45-644f-4b5a-aa06-567b5c8f8279-kube-api-access-l8n48\") pod \"etcd-operator-b45778765-bkbvd\" (UID: \"43448f45-644f-4b5a-aa06-567b5c8f8279\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bkbvd" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.339903 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxw4z\" (UniqueName: \"kubernetes.io/projected/f4e58a9e-ecc8-43de-9518-0b014b2a27d2-kube-api-access-hxw4z\") pod \"machine-api-operator-5694c8668f-65brj\" (UID: \"f4e58a9e-ecc8-43de-9518-0b014b2a27d2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-65brj" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.339978 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b0fa7ff-24c4-431c-bc35-87f9483d5c70-serving-cert\") pod \"controller-manager-879f6c89f-k5psf\" (UID: \"2b0fa7ff-24c4-431c-bc35-87f9483d5c70\") " pod="openshift-controller-manager/controller-manager-879f6c89f-k5psf" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.340300 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-jhd8d"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.339580 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.340773 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.339973 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.340425 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.341524 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-q8sxk"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.342144 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-q8sxk" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.342279 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.342474 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1a96247-d002-4f96-9695-16a4011f3ad5-config\") pod \"openshift-apiserver-operator-796bbdcf4f-dbzkw\" (UID: \"c1a96247-d002-4f96-9695-16a4011f3ad5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dbzkw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.342596 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f4e58a9e-ecc8-43de-9518-0b014b2a27d2-images\") pod \"machine-api-operator-5694c8668f-65brj\" (UID: \"f4e58a9e-ecc8-43de-9518-0b014b2a27d2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-65brj" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.344237 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/43448f45-644f-4b5a-aa06-567b5c8f8279-etcd-client\") pod \"etcd-operator-b45778765-bkbvd\" (UID: \"43448f45-644f-4b5a-aa06-567b5c8f8279\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bkbvd" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.344342 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/15723c66-27d3-4cea-9962-e75bbe7bb967-audit-dir\") pod \"apiserver-76f77b778f-jjt2k\" (UID: \"15723c66-27d3-4cea-9962-e75bbe7bb967\") " pod="openshift-apiserver/apiserver-76f77b778f-jjt2k" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.344430 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52f284ae-bace-4bd8-8140-7f37fbad55d4-config\") pod \"authentication-operator-69f744f599-dltl2\" (UID: \"52f284ae-bace-4bd8-8140-7f37fbad55d4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dltl2" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.344500 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5758b1f6-5135-428d-ad0b-6892a49d1800-trusted-ca\") pod \"console-operator-58897d9998-2vm4g\" (UID: \"5758b1f6-5135-428d-ad0b-6892a49d1800\") " 
pod="openshift-console-operator/console-operator-58897d9998-2vm4g" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.343876 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s9v5x"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.343606 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.344582 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ce7607b6-0e74-47ba-8875-057821862224-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-xmh8s\" (UID: \"ce7607b6-0e74-47ba-8875-057821862224\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xmh8s" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.343665 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.343711 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.343829 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.345065 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-pb7qw"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.344167 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.345692 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-pb7qw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.345909 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.349922 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9nmqg"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.350555 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9nmqg" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.364677 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.368656 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.368903 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6sgg2"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.371313 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-m5n64"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.371822 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.371978 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6sgg2" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.377018 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.379565 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-ds5qk"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.379760 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m5n64" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.381206 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-ds5qk" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.390479 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pzj8w"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.390625 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.391165 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pzj8w" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.391958 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-d8wjb"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.392874 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-d8wjb" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.393467 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5jwbt"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.393988 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-5jwbt" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.395658 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bxgr9"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.396642 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bxgr9" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.400471 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5lfqv"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.401147 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5lfqv" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.401643 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jr9vm"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.402834 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jr9vm" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.403030 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-rcksw"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.403978 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rcksw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.404146 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-28gzs"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.404710 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-28gzs" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.405503 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-98pt8"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.406107 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-98pt8" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.407829 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484825-hgsdh"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.408592 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484825-hgsdh" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.408949 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-gcpwt"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.409552 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-gcpwt" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.409711 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.410227 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-5qtks"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.411238 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-5qtks" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.411432 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-tv6dp"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.412438 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-tv6dp" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.412669 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-2vm4g"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.415551 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-nwrtw"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.418331 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qp45"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.419646 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-t5985"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.421288 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-mgft7"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.424242 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2s8ds"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.428105 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dbzkw"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.428145 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6sgg2"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.429828 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-9mm5p"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.436587 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9nmqg"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.436634 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-v24vn"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.437569 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-m5n64"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.438803 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-ds5qk"] Jan 22 13:45:55 crc 
kubenswrapper[4769]: I0122 13:45:55.439865 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-jjt2k"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.444503 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-d8wjb"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.445884 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-bkbvd"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.446455 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40076fe2-006c-4dc7-ac7c-71fa27c9bb7d-serving-cert\") pod \"openshift-config-operator-7777fb866f-v24vn\" (UID: \"40076fe2-006c-4dc7-ac7c-71fa27c9bb7d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-v24vn" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.446540 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/43448f45-644f-4b5a-aa06-567b5c8f8279-etcd-ca\") pod \"etcd-operator-b45778765-bkbvd\" (UID: \"43448f45-644f-4b5a-aa06-567b5c8f8279\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bkbvd" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.446585 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/43448f45-644f-4b5a-aa06-567b5c8f8279-etcd-service-ca\") pod \"etcd-operator-b45778765-bkbvd\" (UID: \"43448f45-644f-4b5a-aa06-567b5c8f8279\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bkbvd" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.446605 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-jtzpg\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.446632 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db199c04-6231-46b3-a4e7-5cd74604b005-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-28gzs\" (UID: \"db199c04-6231-46b3-a4e7-5cd74604b005\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-28gzs" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.446656 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9fa4c168-21ea-4f79-a600-7f3c8f656bd0-oauth-serving-cert\") pod \"console-f9d7485db-nwrtw\" (UID: \"9fa4c168-21ea-4f79-a600-7f3c8f656bd0\") " pod="openshift-console/console-f9d7485db-nwrtw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.446671 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/81a5be64-af9a-4376-9105-c36371ad5069-audit-dir\") pod \"apiserver-7bbb656c7d-t5985\" (UID: \"81a5be64-af9a-4376-9105-c36371ad5069\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5985" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.446690 4769 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/81769776-c586-45a0-a9ed-42ce4789bb28-profile-collector-cert\") pod \"catalog-operator-68c6474976-q8sxk\" (UID: \"81769776-c586-45a0-a9ed-42ce4789bb28\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-q8sxk" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.446704 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db199c04-6231-46b3-a4e7-5cd74604b005-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-28gzs\" (UID: \"db199c04-6231-46b3-a4e7-5cd74604b005\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-28gzs" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.446727 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43448f45-644f-4b5a-aa06-567b5c8f8279-config\") pod \"etcd-operator-b45778765-bkbvd\" (UID: \"43448f45-644f-4b5a-aa06-567b5c8f8279\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bkbvd" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.446742 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-jtzpg\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.446756 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/81a5be64-af9a-4376-9105-c36371ad5069-etcd-client\") pod \"apiserver-7bbb656c7d-t5985\" (UID: \"81a5be64-af9a-4376-9105-c36371ad5069\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5985" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.446770 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81a5be64-af9a-4376-9105-c36371ad5069-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-t5985\" (UID: \"81a5be64-af9a-4376-9105-c36371ad5069\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5985" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.446814 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d8b75cc3-465e-4542-82ee-4950744e89a0-metrics-tls\") pod \"dns-operator-744455d44c-ds5qk\" (UID: \"d8b75cc3-465e-4542-82ee-4950744e89a0\") " pod="openshift-dns-operator/dns-operator-744455d44c-ds5qk" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.446842 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-jtzpg\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.446863 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/52f284ae-bace-4bd8-8140-7f37fbad55d4-service-ca-bundle\") pod \"authentication-operator-69f744f599-dltl2\" (UID: \"52f284ae-bace-4bd8-8140-7f37fbad55d4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dltl2" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.446883 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5758b1f6-5135-428d-ad0b-6892a49d1800-serving-cert\") pod \"console-operator-58897d9998-2vm4g\" (UID: \"5758b1f6-5135-428d-ad0b-6892a49d1800\") " pod="openshift-console-operator/console-operator-58897d9998-2vm4g" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.446902 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rz965\" (UniqueName: \"kubernetes.io/projected/5c5cf556-ec03-4f29-94ed-13a58f54275c-kube-api-access-rz965\") pod \"router-default-5444994796-pb7qw\" (UID: \"5c5cf556-ec03-4f29-94ed-13a58f54275c\") " pod="openshift-ingress/router-default-5444994796-pb7qw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.446919 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-jtzpg\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.446925 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-q8sxk"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.446964 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88755d81-da75-40b3-97c4-224eaad0eca2-serving-cert\") pod \"route-controller-manager-6576b87f9c-8qp45\" (UID: \"88755d81-da75-40b3-97c4-224eaad0eca2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qp45" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.447019 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-jtzpg\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.447066 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85zdt\" (UniqueName: \"kubernetes.io/projected/81a5be64-af9a-4376-9105-c36371ad5069-kube-api-access-85zdt\") pod \"apiserver-7bbb656c7d-t5985\" (UID: \"81a5be64-af9a-4376-9105-c36371ad5069\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5985" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.447095 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4c2lv\" (UniqueName: \"kubernetes.io/projected/92eb7fb7-d1b8-45ad-b8ff-8411d04eb048-kube-api-access-4c2lv\") pod \"downloads-7954f5f757-mgft7\" (UID: \"92eb7fb7-d1b8-45ad-b8ff-8411d04eb048\") " pod="openshift-console/downloads-7954f5f757-mgft7" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.447120 4769 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-nb62s\" (UniqueName: \"kubernetes.io/projected/15723c66-27d3-4cea-9962-e75bbe7bb967-kube-api-access-nb62s\") pod \"apiserver-76f77b778f-jjt2k\" (UID: \"15723c66-27d3-4cea-9962-e75bbe7bb967\") " pod="openshift-apiserver/apiserver-76f77b778f-jjt2k" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.447143 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/81a5be64-af9a-4376-9105-c36371ad5069-audit-policies\") pod \"apiserver-7bbb656c7d-t5985\" (UID: \"81a5be64-af9a-4376-9105-c36371ad5069\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5985" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.447166 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15723c66-27d3-4cea-9962-e75bbe7bb967-config\") pod \"apiserver-76f77b778f-jjt2k\" (UID: \"15723c66-27d3-4cea-9962-e75bbe7bb967\") " pod="openshift-apiserver/apiserver-76f77b778f-jjt2k" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.447190 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/15723c66-27d3-4cea-9962-e75bbe7bb967-image-import-ca\") pod \"apiserver-76f77b778f-jjt2k\" (UID: \"15723c66-27d3-4cea-9962-e75bbe7bb967\") " pod="openshift-apiserver/apiserver-76f77b778f-jjt2k" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.447209 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6d9p\" (UniqueName: \"kubernetes.io/projected/52f284ae-bace-4bd8-8140-7f37fbad55d4-kube-api-access-r6d9p\") pod \"authentication-operator-69f744f599-dltl2\" (UID: \"52f284ae-bace-4bd8-8140-7f37fbad55d4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dltl2" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.447232 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a6d7f1cf-d68c-4658-98b2-e18d8e70edb8-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-s9v5x\" (UID: \"a6d7f1cf-d68c-4658-98b2-e18d8e70edb8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s9v5x" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.447257 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qb4v8\" (UniqueName: \"kubernetes.io/projected/db7a69ec-2a82-4f9b-b83a-42237a02087e-kube-api-access-qb4v8\") pod \"control-plane-machine-set-operator-78cbb6b69f-pzj8w\" (UID: \"db7a69ec-2a82-4f9b-b83a-42237a02087e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pzj8w" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.447283 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxfjc\" (UniqueName: \"kubernetes.io/projected/88755d81-da75-40b3-97c4-224eaad0eca2-kube-api-access-qxfjc\") pod \"route-controller-manager-6576b87f9c-8qp45\" (UID: \"88755d81-da75-40b3-97c4-224eaad0eca2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qp45" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.447306 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6ckx\" (UniqueName: 
\"kubernetes.io/projected/8c1e55ad-d8f0-4ceb-b929-e4f09903df58-kube-api-access-m6ckx\") pod \"machine-approver-56656f9798-2s5j2\" (UID: \"8c1e55ad-d8f0-4ceb-b929-e4f09903df58\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2s5j2" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.447340 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c1e55ad-d8f0-4ceb-b929-e4f09903df58-config\") pod \"machine-approver-56656f9798-2s5j2\" (UID: \"8c1e55ad-d8f0-4ceb-b929-e4f09903df58\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2s5j2" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.447363 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-jtzpg\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.447385 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjwfr\" (UniqueName: \"kubernetes.io/projected/a6d7f1cf-d68c-4658-98b2-e18d8e70edb8-kube-api-access-qjwfr\") pod \"cluster-samples-operator-665b6dd947-s9v5x\" (UID: \"a6d7f1cf-d68c-4658-98b2-e18d8e70edb8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s9v5x" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.447408 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ba0299e2-1902-461d-bf42-f3d5dfe205ff-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-9mm5p\" (UID: \"ba0299e2-1902-461d-bf42-f3d5dfe205ff\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9mm5p" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.447430 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/db7a69ec-2a82-4f9b-b83a-42237a02087e-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-pzj8w\" (UID: \"db7a69ec-2a82-4f9b-b83a-42237a02087e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pzj8w" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.447489 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43448f45-644f-4b5a-aa06-567b5c8f8279-config\") pod \"etcd-operator-b45778765-bkbvd\" (UID: \"43448f45-644f-4b5a-aa06-567b5c8f8279\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bkbvd" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.447494 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/43448f45-644f-4b5a-aa06-567b5c8f8279-etcd-ca\") pod \"etcd-operator-b45778765-bkbvd\" (UID: \"43448f45-644f-4b5a-aa06-567b5c8f8279\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bkbvd" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.447494 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9fa4c168-21ea-4f79-a600-7f3c8f656bd0-service-ca\") pod 
\"console-f9d7485db-nwrtw\" (UID: \"9fa4c168-21ea-4f79-a600-7f3c8f656bd0\") " pod="openshift-console/console-f9d7485db-nwrtw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.447547 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/81a5be64-af9a-4376-9105-c36371ad5069-audit-dir\") pod \"apiserver-7bbb656c7d-t5985\" (UID: \"81a5be64-af9a-4376-9105-c36371ad5069\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5985" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.447554 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/15723c66-27d3-4cea-9962-e75bbe7bb967-audit\") pod \"apiserver-76f77b778f-jjt2k\" (UID: \"15723c66-27d3-4cea-9962-e75bbe7bb967\") " pod="openshift-apiserver/apiserver-76f77b778f-jjt2k" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.447578 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/15723c66-27d3-4cea-9962-e75bbe7bb967-etcd-client\") pod \"apiserver-76f77b778f-jjt2k\" (UID: \"15723c66-27d3-4cea-9962-e75bbe7bb967\") " pod="openshift-apiserver/apiserver-76f77b778f-jjt2k" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.447598 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/15723c66-27d3-4cea-9962-e75bbe7bb967-etcd-serving-ca\") pod \"apiserver-76f77b778f-jjt2k\" (UID: \"15723c66-27d3-4cea-9962-e75bbe7bb967\") " pod="openshift-apiserver/apiserver-76f77b778f-jjt2k" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.447619 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/81769776-c586-45a0-a9ed-42ce4789bb28-srv-cert\") pod \"catalog-operator-68c6474976-q8sxk\" (UID: \"81769776-c586-45a0-a9ed-42ce4789bb28\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-q8sxk" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.447642 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5c5cf556-ec03-4f29-94ed-13a58f54275c-metrics-certs\") pod \"router-default-5444994796-pb7qw\" (UID: \"5c5cf556-ec03-4f29-94ed-13a58f54275c\") " pod="openshift-ingress/router-default-5444994796-pb7qw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.447668 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9fa4c168-21ea-4f79-a600-7f3c8f656bd0-console-serving-cert\") pod \"console-f9d7485db-nwrtw\" (UID: \"9fa4c168-21ea-4f79-a600-7f3c8f656bd0\") " pod="openshift-console/console-f9d7485db-nwrtw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.447689 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/15723c66-27d3-4cea-9962-e75bbe7bb967-encryption-config\") pod \"apiserver-76f77b778f-jjt2k\" (UID: \"15723c66-27d3-4cea-9962-e75bbe7bb967\") " pod="openshift-apiserver/apiserver-76f77b778f-jjt2k" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.447712 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: 
\"kubernetes.io/empty-dir/40076fe2-006c-4dc7-ac7c-71fa27c9bb7d-available-featuregates\") pod \"openshift-config-operator-7777fb866f-v24vn\" (UID: \"40076fe2-006c-4dc7-ac7c-71fa27c9bb7d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-v24vn" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.447733 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9fa4c168-21ea-4f79-a600-7f3c8f656bd0-trusted-ca-bundle\") pod \"console-f9d7485db-nwrtw\" (UID: \"9fa4c168-21ea-4f79-a600-7f3c8f656bd0\") " pod="openshift-console/console-f9d7485db-nwrtw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.447756 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2b0fa7ff-24c4-431c-bc35-87f9483d5c70-client-ca\") pod \"controller-manager-879f6c89f-k5psf\" (UID: \"2b0fa7ff-24c4-431c-bc35-87f9483d5c70\") " pod="openshift-controller-manager/controller-manager-879f6c89f-k5psf" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.447776 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ce7607b6-0e74-47ba-8875-057821862224-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-xmh8s\" (UID: \"ce7607b6-0e74-47ba-8875-057821862224\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xmh8s" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.447817 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sln4g\" (UniqueName: \"kubernetes.io/projected/2b0fa7ff-24c4-431c-bc35-87f9483d5c70-kube-api-access-sln4g\") pod \"controller-manager-879f6c89f-k5psf\" (UID: \"2b0fa7ff-24c4-431c-bc35-87f9483d5c70\") " pod="openshift-controller-manager/controller-manager-879f6c89f-k5psf" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.447838 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52f284ae-bace-4bd8-8140-7f37fbad55d4-serving-cert\") pod \"authentication-operator-69f744f599-dltl2\" (UID: \"52f284ae-bace-4bd8-8140-7f37fbad55d4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dltl2" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.447904 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e7c7c3d4-58d6-4bd2-a85c-7b933bb20d43-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-jr9vm\" (UID: \"e7c7c3d4-58d6-4bd2-a85c-7b933bb20d43\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jr9vm" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.447926 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2pzt\" (UniqueName: \"kubernetes.io/projected/e7c7c3d4-58d6-4bd2-a85c-7b933bb20d43-kube-api-access-p2pzt\") pod \"package-server-manager-789f6589d5-jr9vm\" (UID: \"e7c7c3d4-58d6-4bd2-a85c-7b933bb20d43\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jr9vm" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.447950 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/43448f45-644f-4b5a-aa06-567b5c8f8279-serving-cert\") pod \"etcd-operator-b45778765-bkbvd\" (UID: \"43448f45-644f-4b5a-aa06-567b5c8f8279\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bkbvd" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.447972 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-9z2dj"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.447971 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjgq6\" (UniqueName: \"kubernetes.io/projected/ba0299e2-1902-461d-bf42-f3d5dfe205ff-kube-api-access-wjgq6\") pod \"multus-admission-controller-857f4d67dd-9mm5p\" (UID: \"ba0299e2-1902-461d-bf42-f3d5dfe205ff\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9mm5p" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.448019 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/8c1e55ad-d8f0-4ceb-b929-e4f09903df58-machine-approver-tls\") pod \"machine-approver-56656f9798-2s5j2\" (UID: \"8c1e55ad-d8f0-4ceb-b929-e4f09903df58\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2s5j2" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.448039 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kz27q\" (UniqueName: \"kubernetes.io/projected/c1a96247-d002-4f96-9695-16a4011f3ad5-kube-api-access-kz27q\") pod \"openshift-apiserver-operator-796bbdcf4f-dbzkw\" (UID: \"c1a96247-d002-4f96-9695-16a4011f3ad5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dbzkw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.448057 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9fa4c168-21ea-4f79-a600-7f3c8f656bd0-console-config\") pod \"console-f9d7485db-nwrtw\" (UID: \"9fa4c168-21ea-4f79-a600-7f3c8f656bd0\") " pod="openshift-console/console-f9d7485db-nwrtw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.448073 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6kks\" (UniqueName: \"kubernetes.io/projected/ce7607b6-0e74-47ba-8875-057821862224-kube-api-access-r6kks\") pod \"cluster-image-registry-operator-dc59b4c8b-xmh8s\" (UID: \"ce7607b6-0e74-47ba-8875-057821862224\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xmh8s" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.448088 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/ce7607b6-0e74-47ba-8875-057821862224-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-xmh8s\" (UID: \"ce7607b6-0e74-47ba-8875-057821862224\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xmh8s" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.448105 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1a96247-d002-4f96-9695-16a4011f3ad5-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-dbzkw\" (UID: \"c1a96247-d002-4f96-9695-16a4011f3ad5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dbzkw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 
13:45:55.448106 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9fa4c168-21ea-4f79-a600-7f3c8f656bd0-oauth-serving-cert\") pod \"console-f9d7485db-nwrtw\" (UID: \"9fa4c168-21ea-4f79-a600-7f3c8f656bd0\") " pod="openshift-console/console-f9d7485db-nwrtw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.448121 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/f4e58a9e-ecc8-43de-9518-0b014b2a27d2-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-65brj\" (UID: \"f4e58a9e-ecc8-43de-9518-0b014b2a27d2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-65brj" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.448138 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e14c6636-281b-40e1-9ee8-1a08812104fd-audit-policies\") pod \"oauth-openshift-558db77b4-jtzpg\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.448153 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-jtzpg\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.448169 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15723c66-27d3-4cea-9962-e75bbe7bb967-serving-cert\") pod \"apiserver-76f77b778f-jjt2k\" (UID: \"15723c66-27d3-4cea-9962-e75bbe7bb967\") " pod="openshift-apiserver/apiserver-76f77b778f-jjt2k" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.448184 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/52f284ae-bace-4bd8-8140-7f37fbad55d4-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-dltl2\" (UID: \"52f284ae-bace-4bd8-8140-7f37fbad55d4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dltl2" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.448200 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwl46\" (UniqueName: \"kubernetes.io/projected/5758b1f6-5135-428d-ad0b-6892a49d1800-kube-api-access-wwl46\") pod \"console-operator-58897d9998-2vm4g\" (UID: \"5758b1f6-5135-428d-ad0b-6892a49d1800\") " pod="openshift-console-operator/console-operator-58897d9998-2vm4g" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.448219 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-jtzpg\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.448244 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxw4z\" (UniqueName: 
\"kubernetes.io/projected/f4e58a9e-ecc8-43de-9518-0b014b2a27d2-kube-api-access-hxw4z\") pod \"machine-api-operator-5694c8668f-65brj\" (UID: \"f4e58a9e-ecc8-43de-9518-0b014b2a27d2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-65brj" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.448267 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b0fa7ff-24c4-431c-bc35-87f9483d5c70-serving-cert\") pod \"controller-manager-879f6c89f-k5psf\" (UID: \"2b0fa7ff-24c4-431c-bc35-87f9483d5c70\") " pod="openshift-controller-manager/controller-manager-879f6c89f-k5psf" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.448291 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8n48\" (UniqueName: \"kubernetes.io/projected/43448f45-644f-4b5a-aa06-567b5c8f8279-kube-api-access-l8n48\") pod \"etcd-operator-b45778765-bkbvd\" (UID: \"43448f45-644f-4b5a-aa06-567b5c8f8279\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bkbvd" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.448313 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1a96247-d002-4f96-9695-16a4011f3ad5-config\") pod \"openshift-apiserver-operator-796bbdcf4f-dbzkw\" (UID: \"c1a96247-d002-4f96-9695-16a4011f3ad5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dbzkw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.448332 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f4e58a9e-ecc8-43de-9518-0b014b2a27d2-images\") pod \"machine-api-operator-5694c8668f-65brj\" (UID: \"f4e58a9e-ecc8-43de-9518-0b014b2a27d2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-65brj" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.448352 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/43448f45-644f-4b5a-aa06-567b5c8f8279-etcd-client\") pod \"etcd-operator-b45778765-bkbvd\" (UID: \"43448f45-644f-4b5a-aa06-567b5c8f8279\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bkbvd" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.448374 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/15723c66-27d3-4cea-9962-e75bbe7bb967-audit-dir\") pod \"apiserver-76f77b778f-jjt2k\" (UID: \"15723c66-27d3-4cea-9962-e75bbe7bb967\") " pod="openshift-apiserver/apiserver-76f77b778f-jjt2k" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.448397 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52f284ae-bace-4bd8-8140-7f37fbad55d4-config\") pod \"authentication-operator-69f744f599-dltl2\" (UID: \"52f284ae-bace-4bd8-8140-7f37fbad55d4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dltl2" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.448419 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5758b1f6-5135-428d-ad0b-6892a49d1800-trusted-ca\") pod \"console-operator-58897d9998-2vm4g\" (UID: \"5758b1f6-5135-428d-ad0b-6892a49d1800\") " pod="openshift-console-operator/console-operator-58897d9998-2vm4g" Jan 22 13:45:55 crc 
kubenswrapper[4769]: I0122 13:45:55.448441 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ce7607b6-0e74-47ba-8875-057821862224-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-xmh8s\" (UID: \"ce7607b6-0e74-47ba-8875-057821862224\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xmh8s" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.448466 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vk48n\" (UniqueName: \"kubernetes.io/projected/d8b75cc3-465e-4542-82ee-4950744e89a0-kube-api-access-vk48n\") pod \"dns-operator-744455d44c-ds5qk\" (UID: \"d8b75cc3-465e-4542-82ee-4950744e89a0\") " pod="openshift-dns-operator/dns-operator-744455d44c-ds5qk" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.448488 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2b0fa7ff-24c4-431c-bc35-87f9483d5c70-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-k5psf\" (UID: \"2b0fa7ff-24c4-431c-bc35-87f9483d5c70\") " pod="openshift-controller-manager/controller-manager-879f6c89f-k5psf" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.448514 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/5c5cf556-ec03-4f29-94ed-13a58f54275c-default-certificate\") pod \"router-default-5444994796-pb7qw\" (UID: \"5c5cf556-ec03-4f29-94ed-13a58f54275c\") " pod="openshift-ingress/router-default-5444994796-pb7qw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.448537 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrbwk\" (UniqueName: \"kubernetes.io/projected/e14c6636-281b-40e1-9ee8-1a08812104fd-kube-api-access-zrbwk\") pod \"oauth-openshift-558db77b4-jtzpg\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.448548 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/43448f45-644f-4b5a-aa06-567b5c8f8279-etcd-service-ca\") pod \"etcd-operator-b45778765-bkbvd\" (UID: \"43448f45-644f-4b5a-aa06-567b5c8f8279\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bkbvd" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.448554 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvrvt\" (UniqueName: \"kubernetes.io/projected/81769776-c586-45a0-a9ed-42ce4789bb28-kube-api-access-cvrvt\") pod \"catalog-operator-68c6474976-q8sxk\" (UID: \"81769776-c586-45a0-a9ed-42ce4789bb28\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-q8sxk" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.448602 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/81a5be64-af9a-4376-9105-c36371ad5069-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-t5985\" (UID: \"81a5be64-af9a-4376-9105-c36371ad5069\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5985" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.448629 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-vbtbb\" (UniqueName: \"kubernetes.io/projected/40076fe2-006c-4dc7-ac7c-71fa27c9bb7d-kube-api-access-vbtbb\") pod \"openshift-config-operator-7777fb866f-v24vn\" (UID: \"40076fe2-006c-4dc7-ac7c-71fa27c9bb7d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-v24vn" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.448651 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9fa4c168-21ea-4f79-a600-7f3c8f656bd0-console-oauth-config\") pod \"console-f9d7485db-nwrtw\" (UID: \"9fa4c168-21ea-4f79-a600-7f3c8f656bd0\") " pod="openshift-console/console-f9d7485db-nwrtw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.448673 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88755d81-da75-40b3-97c4-224eaad0eca2-config\") pod \"route-controller-manager-6576b87f9c-8qp45\" (UID: \"88755d81-da75-40b3-97c4-224eaad0eca2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qp45" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.448692 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/15723c66-27d3-4cea-9962-e75bbe7bb967-trusted-ca-bundle\") pod \"apiserver-76f77b778f-jjt2k\" (UID: \"15723c66-27d3-4cea-9962-e75bbe7bb967\") " pod="openshift-apiserver/apiserver-76f77b778f-jjt2k" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.448714 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/88755d81-da75-40b3-97c4-224eaad0eca2-client-ca\") pod \"route-controller-manager-6576b87f9c-8qp45\" (UID: \"88755d81-da75-40b3-97c4-224eaad0eca2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qp45" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.448737 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-jtzpg\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.448758 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/15723c66-27d3-4cea-9962-e75bbe7bb967-node-pullsecrets\") pod \"apiserver-76f77b778f-jjt2k\" (UID: \"15723c66-27d3-4cea-9962-e75bbe7bb967\") " pod="openshift-apiserver/apiserver-76f77b778f-jjt2k" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.448783 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/db199c04-6231-46b3-a4e7-5cd74604b005-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-28gzs\" (UID: \"db199c04-6231-46b3-a4e7-5cd74604b005\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-28gzs" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.449023 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4e58a9e-ecc8-43de-9518-0b014b2a27d2-config\") pod 
\"machine-api-operator-5694c8668f-65brj\" (UID: \"f4e58a9e-ecc8-43de-9518-0b014b2a27d2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-65brj" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.449053 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-jtzpg\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.449079 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5758b1f6-5135-428d-ad0b-6892a49d1800-config\") pod \"console-operator-58897d9998-2vm4g\" (UID: \"5758b1f6-5135-428d-ad0b-6892a49d1800\") " pod="openshift-console-operator/console-operator-58897d9998-2vm4g" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.449110 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8c1e55ad-d8f0-4ceb-b929-e4f09903df58-auth-proxy-config\") pod \"machine-approver-56656f9798-2s5j2\" (UID: \"8c1e55ad-d8f0-4ceb-b929-e4f09903df58\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2s5j2" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.449132 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-jtzpg\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.449151 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81a5be64-af9a-4376-9105-c36371ad5069-serving-cert\") pod \"apiserver-7bbb656c7d-t5985\" (UID: \"81a5be64-af9a-4376-9105-c36371ad5069\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5985" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.449176 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wt8zc\" (UniqueName: \"kubernetes.io/projected/9fa4c168-21ea-4f79-a600-7f3c8f656bd0-kube-api-access-wt8zc\") pod \"console-f9d7485db-nwrtw\" (UID: \"9fa4c168-21ea-4f79-a600-7f3c8f656bd0\") " pod="openshift-console/console-f9d7485db-nwrtw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.449198 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b0fa7ff-24c4-431c-bc35-87f9483d5c70-config\") pod \"controller-manager-879f6c89f-k5psf\" (UID: \"2b0fa7ff-24c4-431c-bc35-87f9483d5c70\") " pod="openshift-controller-manager/controller-manager-879f6c89f-k5psf" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.449220 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/81a5be64-af9a-4376-9105-c36371ad5069-encryption-config\") pod \"apiserver-7bbb656c7d-t5985\" (UID: \"81a5be64-af9a-4376-9105-c36371ad5069\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5985" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 
13:45:55.449337 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c5cf556-ec03-4f29-94ed-13a58f54275c-service-ca-bundle\") pod \"router-default-5444994796-pb7qw\" (UID: \"5c5cf556-ec03-4f29-94ed-13a58f54275c\") " pod="openshift-ingress/router-default-5444994796-pb7qw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.449365 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e14c6636-281b-40e1-9ee8-1a08812104fd-audit-dir\") pod \"oauth-openshift-558db77b4-jtzpg\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.449385 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/5c5cf556-ec03-4f29-94ed-13a58f54275c-stats-auth\") pod \"router-default-5444994796-pb7qw\" (UID: \"5c5cf556-ec03-4f29-94ed-13a58f54275c\") " pod="openshift-ingress/router-default-5444994796-pb7qw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.449900 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bxgr9"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.450053 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/81a5be64-af9a-4376-9105-c36371ad5069-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-t5985\" (UID: \"81a5be64-af9a-4376-9105-c36371ad5069\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5985" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.451406 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/52f284ae-bace-4bd8-8140-7f37fbad55d4-service-ca-bundle\") pod \"authentication-operator-69f744f599-dltl2\" (UID: \"52f284ae-bace-4bd8-8140-7f37fbad55d4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dltl2" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.451642 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-jtzpg\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.452397 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f4e58a9e-ecc8-43de-9518-0b014b2a27d2-images\") pod \"machine-api-operator-5694c8668f-65brj\" (UID: \"f4e58a9e-ecc8-43de-9518-0b014b2a27d2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-65brj" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.452568 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81a5be64-af9a-4376-9105-c36371ad5069-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-t5985\" (UID: \"81a5be64-af9a-4376-9105-c36371ad5069\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5985" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.452777 4769 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9fa4c168-21ea-4f79-a600-7f3c8f656bd0-service-ca\") pod \"console-f9d7485db-nwrtw\" (UID: \"9fa4c168-21ea-4f79-a600-7f3c8f656bd0\") " pod="openshift-console/console-f9d7485db-nwrtw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.453287 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/15723c66-27d3-4cea-9962-e75bbe7bb967-audit\") pod \"apiserver-76f77b778f-jjt2k\" (UID: \"15723c66-27d3-4cea-9962-e75bbe7bb967\") " pod="openshift-apiserver/apiserver-76f77b778f-jjt2k" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.453371 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15723c66-27d3-4cea-9962-e75bbe7bb967-config\") pod \"apiserver-76f77b778f-jjt2k\" (UID: \"15723c66-27d3-4cea-9962-e75bbe7bb967\") " pod="openshift-apiserver/apiserver-76f77b778f-jjt2k" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.454860 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2b0fa7ff-24c4-431c-bc35-87f9483d5c70-client-ca\") pod \"controller-manager-879f6c89f-k5psf\" (UID: \"2b0fa7ff-24c4-431c-bc35-87f9483d5c70\") " pod="openshift-controller-manager/controller-manager-879f6c89f-k5psf" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.455084 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9fa4c168-21ea-4f79-a600-7f3c8f656bd0-trusted-ca-bundle\") pod \"console-f9d7485db-nwrtw\" (UID: \"9fa4c168-21ea-4f79-a600-7f3c8f656bd0\") " pod="openshift-console/console-f9d7485db-nwrtw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.455373 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9fa4c168-21ea-4f79-a600-7f3c8f656bd0-console-config\") pod \"console-f9d7485db-nwrtw\" (UID: \"9fa4c168-21ea-4f79-a600-7f3c8f656bd0\") " pod="openshift-console/console-f9d7485db-nwrtw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.455493 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b0fa7ff-24c4-431c-bc35-87f9483d5c70-serving-cert\") pod \"controller-manager-879f6c89f-k5psf\" (UID: \"2b0fa7ff-24c4-431c-bc35-87f9483d5c70\") " pod="openshift-controller-manager/controller-manager-879f6c89f-k5psf" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.455579 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-jtzpg\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.455611 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/88755d81-da75-40b3-97c4-224eaad0eca2-client-ca\") pod \"route-controller-manager-6576b87f9c-8qp45\" (UID: \"88755d81-da75-40b3-97c4-224eaad0eca2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qp45" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.455657 4769 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/15723c66-27d3-4cea-9962-e75bbe7bb967-audit-dir\") pod \"apiserver-76f77b778f-jjt2k\" (UID: \"15723c66-27d3-4cea-9962-e75bbe7bb967\") " pod="openshift-apiserver/apiserver-76f77b778f-jjt2k" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.455715 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-jtzpg\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.468303 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.468907 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/81a5be64-af9a-4376-9105-c36371ad5069-etcd-client\") pod \"apiserver-7bbb656c7d-t5985\" (UID: \"81a5be64-af9a-4376-9105-c36371ad5069\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5985" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.469287 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52f284ae-bace-4bd8-8140-7f37fbad55d4-config\") pod \"authentication-operator-69f744f599-dltl2\" (UID: \"52f284ae-bace-4bd8-8140-7f37fbad55d4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dltl2" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.469667 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/15723c66-27d3-4cea-9962-e75bbe7bb967-encryption-config\") pod \"apiserver-76f77b778f-jjt2k\" (UID: \"15723c66-27d3-4cea-9962-e75bbe7bb967\") " pod="openshift-apiserver/apiserver-76f77b778f-jjt2k" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.470288 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-jtzpg\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.470309 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40076fe2-006c-4dc7-ac7c-71fa27c9bb7d-serving-cert\") pod \"openshift-config-operator-7777fb866f-v24vn\" (UID: \"40076fe2-006c-4dc7-ac7c-71fa27c9bb7d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-v24vn" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.470960 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/81a5be64-af9a-4376-9105-c36371ad5069-audit-policies\") pod \"apiserver-7bbb656c7d-t5985\" (UID: \"81a5be64-af9a-4376-9105-c36371ad5069\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5985" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.471048 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/2b0fa7ff-24c4-431c-bc35-87f9483d5c70-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-k5psf\" (UID: \"2b0fa7ff-24c4-431c-bc35-87f9483d5c70\") " pod="openshift-controller-manager/controller-manager-879f6c89f-k5psf" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.471112 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/15723c66-27d3-4cea-9962-e75bbe7bb967-node-pullsecrets\") pod \"apiserver-76f77b778f-jjt2k\" (UID: \"15723c66-27d3-4cea-9962-e75bbe7bb967\") " pod="openshift-apiserver/apiserver-76f77b778f-jjt2k" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.471320 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-jtzpg\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.471659 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4e58a9e-ecc8-43de-9518-0b014b2a27d2-config\") pod \"machine-api-operator-5694c8668f-65brj\" (UID: \"f4e58a9e-ecc8-43de-9518-0b014b2a27d2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-65brj" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.472637 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e14c6636-281b-40e1-9ee8-1a08812104fd-audit-policies\") pod \"oauth-openshift-558db77b4-jtzpg\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.472762 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43448f45-644f-4b5a-aa06-567b5c8f8279-serving-cert\") pod \"etcd-operator-b45778765-bkbvd\" (UID: \"43448f45-644f-4b5a-aa06-567b5c8f8279\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bkbvd" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.472880 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/ce7607b6-0e74-47ba-8875-057821862224-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-xmh8s\" (UID: \"ce7607b6-0e74-47ba-8875-057821862224\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xmh8s" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.473376 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5758b1f6-5135-428d-ad0b-6892a49d1800-trusted-ca\") pod \"console-operator-58897d9998-2vm4g\" (UID: \"5758b1f6-5135-428d-ad0b-6892a49d1800\") " pod="openshift-console-operator/console-operator-58897d9998-2vm4g" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.473514 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/15723c66-27d3-4cea-9962-e75bbe7bb967-trusted-ca-bundle\") pod \"apiserver-76f77b778f-jjt2k\" (UID: \"15723c66-27d3-4cea-9962-e75bbe7bb967\") " pod="openshift-apiserver/apiserver-76f77b778f-jjt2k" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 
13:45:55.473769 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/40076fe2-006c-4dc7-ac7c-71fa27c9bb7d-available-featuregates\") pod \"openshift-config-operator-7777fb866f-v24vn\" (UID: \"40076fe2-006c-4dc7-ac7c-71fa27c9bb7d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-v24vn" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.473991 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/15723c66-27d3-4cea-9962-e75bbe7bb967-etcd-serving-ca\") pod \"apiserver-76f77b778f-jjt2k\" (UID: \"15723c66-27d3-4cea-9962-e75bbe7bb967\") " pod="openshift-apiserver/apiserver-76f77b778f-jjt2k" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.474022 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ce7607b6-0e74-47ba-8875-057821862224-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-xmh8s\" (UID: \"ce7607b6-0e74-47ba-8875-057821862224\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xmh8s" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.474031 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-jtzpg\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.474219 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/43448f45-644f-4b5a-aa06-567b5c8f8279-etcd-client\") pod \"etcd-operator-b45778765-bkbvd\" (UID: \"43448f45-644f-4b5a-aa06-567b5c8f8279\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bkbvd" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.474319 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1a96247-d002-4f96-9695-16a4011f3ad5-config\") pod \"openshift-apiserver-operator-796bbdcf4f-dbzkw\" (UID: \"c1a96247-d002-4f96-9695-16a4011f3ad5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dbzkw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.474639 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88755d81-da75-40b3-97c4-224eaad0eca2-serving-cert\") pod \"route-controller-manager-6576b87f9c-8qp45\" (UID: \"88755d81-da75-40b3-97c4-224eaad0eca2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qp45" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.474972 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/8c1e55ad-d8f0-4ceb-b929-e4f09903df58-machine-approver-tls\") pod \"machine-approver-56656f9798-2s5j2\" (UID: \"8c1e55ad-d8f0-4ceb-b929-e4f09903df58\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2s5j2" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.474966 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5758b1f6-5135-428d-ad0b-6892a49d1800-serving-cert\") pod 
\"console-operator-58897d9998-2vm4g\" (UID: \"5758b1f6-5135-428d-ad0b-6892a49d1800\") " pod="openshift-console-operator/console-operator-58897d9998-2vm4g" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.475337 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1a96247-d002-4f96-9695-16a4011f3ad5-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-dbzkw\" (UID: \"c1a96247-d002-4f96-9695-16a4011f3ad5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dbzkw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.475640 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/15723c66-27d3-4cea-9962-e75bbe7bb967-image-import-ca\") pod \"apiserver-76f77b778f-jjt2k\" (UID: \"15723c66-27d3-4cea-9962-e75bbe7bb967\") " pod="openshift-apiserver/apiserver-76f77b778f-jjt2k" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.475862 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88755d81-da75-40b3-97c4-224eaad0eca2-config\") pod \"route-controller-manager-6576b87f9c-8qp45\" (UID: \"88755d81-da75-40b3-97c4-224eaad0eca2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qp45" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.475348 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52f284ae-bace-4bd8-8140-7f37fbad55d4-serving-cert\") pod \"authentication-operator-69f744f599-dltl2\" (UID: \"52f284ae-bace-4bd8-8140-7f37fbad55d4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dltl2" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.475906 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5758b1f6-5135-428d-ad0b-6892a49d1800-config\") pod \"console-operator-58897d9998-2vm4g\" (UID: \"5758b1f6-5135-428d-ad0b-6892a49d1800\") " pod="openshift-console-operator/console-operator-58897d9998-2vm4g" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.476191 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9fa4c168-21ea-4f79-a600-7f3c8f656bd0-console-oauth-config\") pod \"console-f9d7485db-nwrtw\" (UID: \"9fa4c168-21ea-4f79-a600-7f3c8f656bd0\") " pod="openshift-console/console-f9d7485db-nwrtw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.476308 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e14c6636-281b-40e1-9ee8-1a08812104fd-audit-dir\") pod \"oauth-openshift-558db77b4-jtzpg\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.476370 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8c1e55ad-d8f0-4ceb-b929-e4f09903df58-auth-proxy-config\") pod \"machine-approver-56656f9798-2s5j2\" (UID: \"8c1e55ad-d8f0-4ceb-b929-e4f09903df58\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2s5j2" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.476623 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/2b0fa7ff-24c4-431c-bc35-87f9483d5c70-config\") pod \"controller-manager-879f6c89f-k5psf\" (UID: \"2b0fa7ff-24c4-431c-bc35-87f9483d5c70\") " pod="openshift-controller-manager/controller-manager-879f6c89f-k5psf" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.476721 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c1e55ad-d8f0-4ceb-b929-e4f09903df58-config\") pod \"machine-approver-56656f9798-2s5j2\" (UID: \"8c1e55ad-d8f0-4ceb-b929-e4f09903df58\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2s5j2" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.476819 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/52f284ae-bace-4bd8-8140-7f37fbad55d4-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-dltl2\" (UID: \"52f284ae-bace-4bd8-8140-7f37fbad55d4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dltl2" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.476868 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5lfqv"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.477192 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-jtzpg\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.477880 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/15723c66-27d3-4cea-9962-e75bbe7bb967-etcd-client\") pod \"apiserver-76f77b778f-jjt2k\" (UID: \"15723c66-27d3-4cea-9962-e75bbe7bb967\") " pod="openshift-apiserver/apiserver-76f77b778f-jjt2k" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.478147 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a6d7f1cf-d68c-4658-98b2-e18d8e70edb8-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-s9v5x\" (UID: \"a6d7f1cf-d68c-4658-98b2-e18d8e70edb8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s9v5x" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.478470 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-jtzpg\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.478622 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9fa4c168-21ea-4f79-a600-7f3c8f656bd0-console-serving-cert\") pod \"console-f9d7485db-nwrtw\" (UID: \"9fa4c168-21ea-4f79-a600-7f3c8f656bd0\") " pod="openshift-console/console-f9d7485db-nwrtw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.478760 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xmh8s"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.479395 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-jtzpg\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.480386 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.480538 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/81a5be64-af9a-4376-9105-c36371ad5069-encryption-config\") pod \"apiserver-7bbb656c7d-t5985\" (UID: \"81a5be64-af9a-4376-9105-c36371ad5069\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5985" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.480734 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/f4e58a9e-ecc8-43de-9518-0b014b2a27d2-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-65brj\" (UID: \"f4e58a9e-ecc8-43de-9518-0b014b2a27d2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-65brj" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.481294 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15723c66-27d3-4cea-9962-e75bbe7bb967-serving-cert\") pod \"apiserver-76f77b778f-jjt2k\" (UID: \"15723c66-27d3-4cea-9962-e75bbe7bb967\") " pod="openshift-apiserver/apiserver-76f77b778f-jjt2k" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.483135 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-jtzpg\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.483564 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-jtzpg\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.483938 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pzj8w"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.484429 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81a5be64-af9a-4376-9105-c36371ad5069-serving-cert\") pod \"apiserver-7bbb656c7d-t5985\" (UID: \"81a5be64-af9a-4376-9105-c36371ad5069\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5985" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.485037 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-image-registry/image-registry-697d97f7c8-jhd8d"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.486022 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5jwbt"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.487031 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jr9vm"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.488134 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-tv6dp"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.489446 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.489979 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-gcpwt"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.491007 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-rcksw"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.492084 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-98pt8"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.493100 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484825-hgsdh"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.494077 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-ggj4q"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.494598 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-ggj4q" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.495063 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-xdxvs"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.496061 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-xdxvs" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.496453 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-28gzs"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.497483 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-xdxvs"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.498467 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-5qtks"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.499716 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-rkk84"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.500358 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-rkk84" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.500715 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-rkk84"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.516580 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.530477 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.550070 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/db199c04-6231-46b3-a4e7-5cd74604b005-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-28gzs\" (UID: \"db199c04-6231-46b3-a4e7-5cd74604b005\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-28gzs" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.550139 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c5cf556-ec03-4f29-94ed-13a58f54275c-service-ca-bundle\") pod \"router-default-5444994796-pb7qw\" (UID: \"5c5cf556-ec03-4f29-94ed-13a58f54275c\") " pod="openshift-ingress/router-default-5444994796-pb7qw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.550164 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/5c5cf556-ec03-4f29-94ed-13a58f54275c-stats-auth\") pod \"router-default-5444994796-pb7qw\" (UID: \"5c5cf556-ec03-4f29-94ed-13a58f54275c\") " pod="openshift-ingress/router-default-5444994796-pb7qw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.550185 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db199c04-6231-46b3-a4e7-5cd74604b005-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-28gzs\" (UID: \"db199c04-6231-46b3-a4e7-5cd74604b005\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-28gzs" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.550213 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/81769776-c586-45a0-a9ed-42ce4789bb28-profile-collector-cert\") pod \"catalog-operator-68c6474976-q8sxk\" (UID: \"81769776-c586-45a0-a9ed-42ce4789bb28\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-q8sxk" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.550234 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db199c04-6231-46b3-a4e7-5cd74604b005-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-28gzs\" (UID: \"db199c04-6231-46b3-a4e7-5cd74604b005\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-28gzs" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.550264 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d8b75cc3-465e-4542-82ee-4950744e89a0-metrics-tls\") pod \"dns-operator-744455d44c-ds5qk\" (UID: \"d8b75cc3-465e-4542-82ee-4950744e89a0\") " 
pod="openshift-dns-operator/dns-operator-744455d44c-ds5qk" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.550288 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rz965\" (UniqueName: \"kubernetes.io/projected/5c5cf556-ec03-4f29-94ed-13a58f54275c-kube-api-access-rz965\") pod \"router-default-5444994796-pb7qw\" (UID: \"5c5cf556-ec03-4f29-94ed-13a58f54275c\") " pod="openshift-ingress/router-default-5444994796-pb7qw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.550386 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qb4v8\" (UniqueName: \"kubernetes.io/projected/db7a69ec-2a82-4f9b-b83a-42237a02087e-kube-api-access-qb4v8\") pod \"control-plane-machine-set-operator-78cbb6b69f-pzj8w\" (UID: \"db7a69ec-2a82-4f9b-b83a-42237a02087e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pzj8w" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.550441 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ba0299e2-1902-461d-bf42-f3d5dfe205ff-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-9mm5p\" (UID: \"ba0299e2-1902-461d-bf42-f3d5dfe205ff\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9mm5p" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.550466 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/db7a69ec-2a82-4f9b-b83a-42237a02087e-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-pzj8w\" (UID: \"db7a69ec-2a82-4f9b-b83a-42237a02087e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pzj8w" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.550488 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/81769776-c586-45a0-a9ed-42ce4789bb28-srv-cert\") pod \"catalog-operator-68c6474976-q8sxk\" (UID: \"81769776-c586-45a0-a9ed-42ce4789bb28\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-q8sxk" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.550492 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.550507 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5c5cf556-ec03-4f29-94ed-13a58f54275c-metrics-certs\") pod \"router-default-5444994796-pb7qw\" (UID: \"5c5cf556-ec03-4f29-94ed-13a58f54275c\") " pod="openshift-ingress/router-default-5444994796-pb7qw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.550543 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e7c7c3d4-58d6-4bd2-a85c-7b933bb20d43-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-jr9vm\" (UID: \"e7c7c3d4-58d6-4bd2-a85c-7b933bb20d43\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jr9vm" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.550564 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2pzt\" (UniqueName: 
\"kubernetes.io/projected/e7c7c3d4-58d6-4bd2-a85c-7b933bb20d43-kube-api-access-p2pzt\") pod \"package-server-manager-789f6589d5-jr9vm\" (UID: \"e7c7c3d4-58d6-4bd2-a85c-7b933bb20d43\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jr9vm" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.550587 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjgq6\" (UniqueName: \"kubernetes.io/projected/ba0299e2-1902-461d-bf42-f3d5dfe205ff-kube-api-access-wjgq6\") pod \"multus-admission-controller-857f4d67dd-9mm5p\" (UID: \"ba0299e2-1902-461d-bf42-f3d5dfe205ff\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9mm5p" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.550662 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vk48n\" (UniqueName: \"kubernetes.io/projected/d8b75cc3-465e-4542-82ee-4950744e89a0-kube-api-access-vk48n\") pod \"dns-operator-744455d44c-ds5qk\" (UID: \"d8b75cc3-465e-4542-82ee-4950744e89a0\") " pod="openshift-dns-operator/dns-operator-744455d44c-ds5qk" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.550684 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/5c5cf556-ec03-4f29-94ed-13a58f54275c-default-certificate\") pod \"router-default-5444994796-pb7qw\" (UID: \"5c5cf556-ec03-4f29-94ed-13a58f54275c\") " pod="openshift-ingress/router-default-5444994796-pb7qw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.550717 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvrvt\" (UniqueName: \"kubernetes.io/projected/81769776-c586-45a0-a9ed-42ce4789bb28-kube-api-access-cvrvt\") pod \"catalog-operator-68c6474976-q8sxk\" (UID: \"81769776-c586-45a0-a9ed-42ce4789bb28\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-q8sxk" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.571363 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.589945 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.610075 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.629664 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.650980 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.670521 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.682927 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ba0299e2-1902-461d-bf42-f3d5dfe205ff-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-9mm5p\" (UID: \"ba0299e2-1902-461d-bf42-f3d5dfe205ff\") " 
pod="openshift-multus/multus-admission-controller-857f4d67dd-9mm5p" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.690258 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.710574 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.730165 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.750366 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.770855 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.790193 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.794376 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/81769776-c586-45a0-a9ed-42ce4789bb28-srv-cert\") pod \"catalog-operator-68c6474976-q8sxk\" (UID: \"81769776-c586-45a0-a9ed-42ce4789bb28\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-q8sxk" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.810103 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.814699 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/81769776-c586-45a0-a9ed-42ce4789bb28-profile-collector-cert\") pod \"catalog-operator-68c6474976-q8sxk\" (UID: \"81769776-c586-45a0-a9ed-42ce4789bb28\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-q8sxk" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.830077 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.851093 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.860331 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/5c5cf556-ec03-4f29-94ed-13a58f54275c-stats-auth\") pod \"router-default-5444994796-pb7qw\" (UID: \"5c5cf556-ec03-4f29-94ed-13a58f54275c\") " pod="openshift-ingress/router-default-5444994796-pb7qw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.871133 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.891267 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.896088 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: 
\"kubernetes.io/secret/5c5cf556-ec03-4f29-94ed-13a58f54275c-default-certificate\") pod \"router-default-5444994796-pb7qw\" (UID: \"5c5cf556-ec03-4f29-94ed-13a58f54275c\") " pod="openshift-ingress/router-default-5444994796-pb7qw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.911595 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.931480 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.945928 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5c5cf556-ec03-4f29-94ed-13a58f54275c-metrics-certs\") pod \"router-default-5444994796-pb7qw\" (UID: \"5c5cf556-ec03-4f29-94ed-13a58f54275c\") " pod="openshift-ingress/router-default-5444994796-pb7qw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.950919 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.953759 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c5cf556-ec03-4f29-94ed-13a58f54275c-service-ca-bundle\") pod \"router-default-5444994796-pb7qw\" (UID: \"5c5cf556-ec03-4f29-94ed-13a58f54275c\") " pod="openshift-ingress/router-default-5444994796-pb7qw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.970462 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.990776 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.010381 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.030706 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.051033 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.070318 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.091108 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.110031 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.130116 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.151693 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 
22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.170612 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.190274 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.210065 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.216731 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d8b75cc3-465e-4542-82ee-4950744e89a0-metrics-tls\") pod \"dns-operator-744455d44c-ds5qk\" (UID: \"d8b75cc3-465e-4542-82ee-4950744e89a0\") " pod="openshift-dns-operator/dns-operator-744455d44c-ds5qk" Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.230667 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.250966 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.256734 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/db7a69ec-2a82-4f9b-b83a-42237a02087e-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-pzj8w\" (UID: \"db7a69ec-2a82-4f9b-b83a-42237a02087e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pzj8w" Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.270947 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.290693 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.311105 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.331218 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.350415 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.371082 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.390410 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.409235 4769 request.go:700] Waited for 1.014828145s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dmarketplace-operator-metrics&limit=500&resourceVersion=0 Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 
Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.441102 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.451280 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.471616 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.491846 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.511205 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.530529 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Jan 22 13:45:56 crc kubenswrapper[4769]: E0122 13:45:56.551333 4769 configmap.go:193] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: failed to sync configmap cache: timed out waiting for the condition
Jan 22 13:45:56 crc kubenswrapper[4769]: E0122 13:45:56.551415 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/db199c04-6231-46b3-a4e7-5cd74604b005-config podName:db199c04-6231-46b3-a4e7-5cd74604b005 nodeName:}" failed. No retries permitted until 2026-01-22 13:45:57.051396758 +0000 UTC m=+136.462506687 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/db199c04-6231-46b3-a4e7-5cd74604b005-config") pod "openshift-kube-scheduler-operator-5fdd9b5758-28gzs" (UID: "db199c04-6231-46b3-a4e7-5cd74604b005") : failed to sync configmap cache: timed out waiting for the condition
Jan 22 13:45:56 crc kubenswrapper[4769]: E0122 13:45:56.551414 4769 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: failed to sync secret cache: timed out waiting for the condition
Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.551481 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Jan 22 13:45:56 crc kubenswrapper[4769]: E0122 13:45:56.551511 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e7c7c3d4-58d6-4bd2-a85c-7b933bb20d43-package-server-manager-serving-cert podName:e7c7c3d4-58d6-4bd2-a85c-7b933bb20d43 nodeName:}" failed. No retries permitted until 2026-01-22 13:45:57.051485592 +0000 UTC m=+136.462595521 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/e7c7c3d4-58d6-4bd2-a85c-7b933bb20d43-package-server-manager-serving-cert") pod "package-server-manager-789f6589d5-jr9vm" (UID: "e7c7c3d4-58d6-4bd2-a85c-7b933bb20d43") : failed to sync secret cache: timed out waiting for the condition
Jan 22 13:45:56 crc kubenswrapper[4769]: E0122 13:45:56.551434 4769 secret.go:188] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition
Jan 22 13:45:56 crc kubenswrapper[4769]: E0122 13:45:56.551561 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/db199c04-6231-46b3-a4e7-5cd74604b005-serving-cert podName:db199c04-6231-46b3-a4e7-5cd74604b005 nodeName:}" failed. No retries permitted until 2026-01-22 13:45:57.051553184 +0000 UTC m=+136.462663263 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/db199c04-6231-46b3-a4e7-5cd74604b005-serving-cert") pod "openshift-kube-scheduler-operator-5fdd9b5758-28gzs" (UID: "db199c04-6231-46b3-a4e7-5cd74604b005") : failed to sync secret cache: timed out waiting for the condition
Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.571039 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.592281 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.612762 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.630870 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.650116 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.671315 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.691167 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.710879 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.730415 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.751127 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.771918 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
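[editor's note] The nestedpendingoperations.go:348 errors above are the volume manager's retry gate: a mount that fails (here because the ConfigMap/Secret informer caches had not yet synced) may not be retried until an exponentially growing delay passes. The 500ms durationBeforeRetry and the doubling behavior match the log; the 2m2s cap below is what current kubelets use but should be treated as an assumption. A minimal sketch:

```go
// Sketch of the "No retries permitted until ..." gate: each failed operation
// records when it may run again, doubling the delay on consecutive failures.
package main

import (
	"fmt"
	"time"
)

const (
	initialDurationBeforeRetry = 500 * time.Millisecond
	maxDurationBeforeRetry     = 2*time.Minute + 2*time.Second // assumed cap
)

type backoff struct {
	lastError           time.Time
	durationBeforeRetry time.Duration
}

// recordError bumps the delay after a failure.
func (b *backoff) recordError(now time.Time) {
	if b.durationBeforeRetry == 0 {
		b.durationBeforeRetry = initialDurationBeforeRetry
	} else if b.durationBeforeRetry *= 2; b.durationBeforeRetry > maxDurationBeforeRetry {
		b.durationBeforeRetry = maxDurationBeforeRetry
	}
	b.lastError = now
}

// allowed reports whether the operation may be retried yet.
func (b *backoff) allowed(now time.Time) bool {
	return now.Sub(b.lastError) >= b.durationBeforeRetry
}

func main() {
	var b backoff
	now := time.Now()
	for i := 0; i < 4; i++ { // delays: 500ms, 1s, 2s, 4s, ...
		b.recordError(now)
		fmt.Printf("failure %d: no retries permitted until %s\n",
			i+1, now.Add(b.durationBeforeRetry).Format(time.RFC3339Nano))
		now = now.Add(b.durationBeforeRetry)
	}
}
```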
Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.791185 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.811197 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.830035 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.850167 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.878509 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.891270 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.910210 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.931614 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.950609 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.970738 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.991863 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.010321 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.029744 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.050923 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.070941 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e7c7c3d4-58d6-4bd2-a85c-7b933bb20d43-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-jr9vm\" (UID: \"e7c7c3d4-58d6-4bd2-a85c-7b933bb20d43\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jr9vm"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.071196 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db199c04-6231-46b3-a4e7-5cd74604b005-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-28gzs\" (UID: \"db199c04-6231-46b3-a4e7-5cd74604b005\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-28gzs"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.071231 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db199c04-6231-46b3-a4e7-5cd74604b005-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-28gzs\" (UID: \"db199c04-6231-46b3-a4e7-5cd74604b005\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-28gzs"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.071574 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.073330 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db199c04-6231-46b3-a4e7-5cd74604b005-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-28gzs\" (UID: \"db199c04-6231-46b3-a4e7-5cd74604b005\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-28gzs"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.078722 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db199c04-6231-46b3-a4e7-5cd74604b005-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-28gzs\" (UID: \"db199c04-6231-46b3-a4e7-5cd74604b005\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-28gzs"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.084249 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e7c7c3d4-58d6-4bd2-a85c-7b933bb20d43-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-jr9vm\" (UID: \"e7c7c3d4-58d6-4bd2-a85c-7b933bb20d43\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jr9vm"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.143214 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbtbb\" (UniqueName: \"kubernetes.io/projected/40076fe2-006c-4dc7-ac7c-71fa27c9bb7d-kube-api-access-vbtbb\") pod \"openshift-config-operator-7777fb866f-v24vn\" (UID: \"40076fe2-006c-4dc7-ac7c-71fa27c9bb7d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-v24vn"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.169328 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjwfr\" (UniqueName: \"kubernetes.io/projected/a6d7f1cf-d68c-4658-98b2-e18d8e70edb8-kube-api-access-qjwfr\") pod \"cluster-samples-operator-665b6dd947-s9v5x\" (UID: \"a6d7f1cf-d68c-4658-98b2-e18d8e70edb8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s9v5x"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.170701 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-v24vn"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.183526 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kz27q\" (UniqueName: \"kubernetes.io/projected/c1a96247-d002-4f96-9695-16a4011f3ad5-kube-api-access-kz27q\") pod \"openshift-apiserver-operator-796bbdcf4f-dbzkw\" (UID: \"c1a96247-d002-4f96-9695-16a4011f3ad5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dbzkw"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.201229 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dbzkw"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.214671 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8n48\" (UniqueName: \"kubernetes.io/projected/43448f45-644f-4b5a-aa06-567b5c8f8279-kube-api-access-l8n48\") pod \"etcd-operator-b45778765-bkbvd\" (UID: \"43448f45-644f-4b5a-aa06-567b5c8f8279\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bkbvd"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.229332 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sln4g\" (UniqueName: \"kubernetes.io/projected/2b0fa7ff-24c4-431c-bc35-87f9483d5c70-kube-api-access-sln4g\") pod \"controller-manager-879f6c89f-k5psf\" (UID: \"2b0fa7ff-24c4-431c-bc35-87f9483d5c70\") " pod="openshift-controller-manager/controller-manager-879f6c89f-k5psf"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.259644 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6kks\" (UniqueName: \"kubernetes.io/projected/ce7607b6-0e74-47ba-8875-057821862224-kube-api-access-r6kks\") pod \"cluster-image-registry-operator-dc59b4c8b-xmh8s\" (UID: \"ce7607b6-0e74-47ba-8875-057821862224\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xmh8s"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.271291 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85zdt\" (UniqueName: \"kubernetes.io/projected/81a5be64-af9a-4376-9105-c36371ad5069-kube-api-access-85zdt\") pod \"apiserver-7bbb656c7d-t5985\" (UID: \"81a5be64-af9a-4376-9105-c36371ad5069\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5985"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.288372 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4c2lv\" (UniqueName: \"kubernetes.io/projected/92eb7fb7-d1b8-45ad-b8ff-8411d04eb048-kube-api-access-4c2lv\") pod \"downloads-7954f5f757-mgft7\" (UID: \"92eb7fb7-d1b8-45ad-b8ff-8411d04eb048\") " pod="openshift-console/downloads-7954f5f757-mgft7"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.308994 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nb62s\" (UniqueName: \"kubernetes.io/projected/15723c66-27d3-4cea-9962-e75bbe7bb967-kube-api-access-nb62s\") pod \"apiserver-76f77b778f-jjt2k\" (UID: \"15723c66-27d3-4cea-9962-e75bbe7bb967\") " pod="openshift-apiserver/apiserver-76f77b778f-jjt2k"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.317309 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-mgft7"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.324511 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrbwk\" (UniqueName: \"kubernetes.io/projected/e14c6636-281b-40e1-9ee8-1a08812104fd-kube-api-access-zrbwk\") pod \"oauth-openshift-558db77b4-jtzpg\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.327091 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-bkbvd"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.353777 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ce7607b6-0e74-47ba-8875-057821862224-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-xmh8s\" (UID: \"ce7607b6-0e74-47ba-8875-057821862224\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xmh8s"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.368968 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6ckx\" (UniqueName: \"kubernetes.io/projected/8c1e55ad-d8f0-4ceb-b929-e4f09903df58-kube-api-access-m6ckx\") pod \"machine-approver-56656f9798-2s5j2\" (UID: \"8c1e55ad-d8f0-4ceb-b929-e4f09903df58\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2s5j2"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.375626 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-v24vn"]
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.385664 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxfjc\" (UniqueName: \"kubernetes.io/projected/88755d81-da75-40b3-97c4-224eaad0eca2-kube-api-access-qxfjc\") pod \"route-controller-manager-6576b87f9c-8qp45\" (UID: \"88755d81-da75-40b3-97c4-224eaad0eca2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qp45"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.405397 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wt8zc\" (UniqueName: \"kubernetes.io/projected/9fa4c168-21ea-4f79-a600-7f3c8f656bd0-kube-api-access-wt8zc\") pod \"console-f9d7485db-nwrtw\" (UID: \"9fa4c168-21ea-4f79-a600-7f3c8f656bd0\") " pod="openshift-console/console-f9d7485db-nwrtw"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.424267 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6d9p\" (UniqueName: \"kubernetes.io/projected/52f284ae-bace-4bd8-8140-7f37fbad55d4-kube-api-access-r6d9p\") pod \"authentication-operator-69f744f599-dltl2\" (UID: \"52f284ae-bace-4bd8-8140-7f37fbad55d4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dltl2"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.425968 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-k5psf"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.428858 4769 request.go:700] Waited for 1.952837447s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/serviceaccounts/console-operator/token
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.428968 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s9v5x"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.445962 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwl46\" (UniqueName: \"kubernetes.io/projected/5758b1f6-5135-428d-ad0b-6892a49d1800-kube-api-access-wwl46\") pod \"console-operator-58897d9998-2vm4g\" (UID: \"5758b1f6-5135-428d-ad0b-6892a49d1800\") " pod="openshift-console-operator/console-operator-58897d9998-2vm4g"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.461358 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.465570 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxw4z\" (UniqueName: \"kubernetes.io/projected/f4e58a9e-ecc8-43de-9518-0b014b2a27d2-kube-api-access-hxw4z\") pod \"machine-api-operator-5694c8668f-65brj\" (UID: \"f4e58a9e-ecc8-43de-9518-0b014b2a27d2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-65brj"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.470702 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.489355 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-jjt2k"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.490427 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.494001 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-mgft7"]
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.497624 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5985"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.510238 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.510562 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-nwrtw"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.518938 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qp45"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.529915 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xmh8s"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.533121 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.537804 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-bkbvd"]
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.549485 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-2vm4g"
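[editor's note] Each "Caches populated for *v1.Secret/*v1.ConfigMap" line above marks a reflector finishing its initial list/watch for one object the node's pods reference; the earlier "failed to sync ... cache" mount errors clear once the matching cache reports synced. The kubelet's secret/configmap managers are more specialized than a generic informer, but the sync contract is the same one client-go exposes. A sketch, assuming a default kubeconfig path and using one of the namespaces from the log:

```go
// Sketch of waiting for an informer cache to populate before acting on its
// contents; illustrative, not the kubelet's actual secret manager.
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Scope the watch to one namespace, mirroring how the kubelet only
	// tracks the Secrets/ConfigMaps its pods actually reference.
	factory := informers.NewSharedInformerFactoryWithOptions(
		client, 10*time.Minute, informers.WithNamespace("openshift-dns-operator"))
	secrets := factory.Core().V1().Secrets().Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)

	// Volume setup must not proceed before this returns true; attempting a
	// mount first yields the "failed to sync secret cache" errors above.
	if !cache.WaitForCacheSync(stop, secrets.HasSynced) {
		panic("timed out waiting for the condition")
	}
	fmt.Println("caches populated for *v1.Secret")
}
```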
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.551705 4769 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.570574 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.590510 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.613398 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.617753 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-bkbvd" event={"ID":"43448f45-644f-4b5a-aa06-567b5c8f8279","Type":"ContainerStarted","Data":"4684ed1cfcb96270523a6a8d7bd57101ca77a0e2ffbd8a1ed6db94460013be10"}
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.618777 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-mgft7" event={"ID":"92eb7fb7-d1b8-45ad-b8ff-8411d04eb048","Type":"ContainerStarted","Data":"2faa745b588ac7a75553576155a5f95f83d99449a4fa8e63ecfe096f528d750f"}
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.619923 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-v24vn" event={"ID":"40076fe2-006c-4dc7-ac7c-71fa27c9bb7d","Type":"ContainerStarted","Data":"a1034525d97a28712dee57e0fe1cf0efc3208802ef24e184494647e1aacdd31a"}
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.624638 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dbzkw"]
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.628865 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s9v5x"]
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.629242 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2s5j2"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.630273 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.655709 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-dltl2"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.673866 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-65brj"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.680104 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvrvt\" (UniqueName: \"kubernetes.io/projected/81769776-c586-45a0-a9ed-42ce4789bb28-kube-api-access-cvrvt\") pod \"catalog-operator-68c6474976-q8sxk\" (UID: \"81769776-c586-45a0-a9ed-42ce4789bb28\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-q8sxk"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.686891 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/db199c04-6231-46b3-a4e7-5cd74604b005-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-28gzs\" (UID: \"db199c04-6231-46b3-a4e7-5cd74604b005\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-28gzs"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.706743 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2pzt\" (UniqueName: \"kubernetes.io/projected/e7c7c3d4-58d6-4bd2-a85c-7b933bb20d43-kube-api-access-p2pzt\") pod \"package-server-manager-789f6589d5-jr9vm\" (UID: \"e7c7c3d4-58d6-4bd2-a85c-7b933bb20d43\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jr9vm"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.724532 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjgq6\" (UniqueName: \"kubernetes.io/projected/ba0299e2-1902-461d-bf42-f3d5dfe205ff-kube-api-access-wjgq6\") pod \"multus-admission-controller-857f4d67dd-9mm5p\" (UID: \"ba0299e2-1902-461d-bf42-f3d5dfe205ff\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9mm5p"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.744025 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vk48n\" (UniqueName: \"kubernetes.io/projected/d8b75cc3-465e-4542-82ee-4950744e89a0-kube-api-access-vk48n\") pod \"dns-operator-744455d44c-ds5qk\" (UID: \"d8b75cc3-465e-4542-82ee-4950744e89a0\") " pod="openshift-dns-operator/dns-operator-744455d44c-ds5qk"
Jan 22 13:45:57 crc kubenswrapper[4769]: W0122 13:45:57.753149 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc1a96247_d002_4f96_9695_16a4011f3ad5.slice/crio-72b0fccf83855e247e0a6c9983b7b2a8640e6a27df7042e164f8fe56bfcb6df9 WatchSource:0}: Error finding container 72b0fccf83855e247e0a6c9983b7b2a8640e6a27df7042e164f8fe56bfcb6df9: Status 404 returned error can't find the container with id 72b0fccf83855e247e0a6c9983b7b2a8640e6a27df7042e164f8fe56bfcb6df9
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.768287 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qb4v8\" (UniqueName: \"kubernetes.io/projected/db7a69ec-2a82-4f9b-b83a-42237a02087e-kube-api-access-qb4v8\") pod \"control-plane-machine-set-operator-78cbb6b69f-pzj8w\" (UID: \"db7a69ec-2a82-4f9b-b83a-42237a02087e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pzj8w"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.785706 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rz965\" (UniqueName: \"kubernetes.io/projected/5c5cf556-ec03-4f29-94ed-13a58f54275c-kube-api-access-rz965\") pod \"router-default-5444994796-pb7qw\" (UID: \"5c5cf556-ec03-4f29-94ed-13a58f54275c\") " pod="openshift-ingress/router-default-5444994796-pb7qw"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.801070 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jr9vm"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.809162 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-28gzs"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.894263 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a9e87e73-cad4-48f0-81f9-d636cd123278-metrics-tls\") pod \"ingress-operator-5b745b69d9-9z2dj\" (UID: \"a9e87e73-cad4-48f0-81f9-d636cd123278\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9z2dj"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.894344 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ddda125-6c9a-4546-901a-a32dd6e99251-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-bxgr9\" (UID: \"9ddda125-6c9a-4546-901a-a32dd6e99251\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bxgr9"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.894373 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0335a481-e6c1-459c-8325-5da8dfcbcdb1-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-9nmqg\" (UID: \"0335a481-e6c1-459c-8325-5da8dfcbcdb1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9nmqg"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.894415 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/75dcccce-425a-46ab-bfeb-dc5a0ee835d4-registry-tls\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.894437 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7d18d670-f698-4b8c-b6c3-300dc1ed8e46-srv-cert\") pod \"olm-operator-6b444d44fb-6sgg2\" (UID: \"7d18d670-f698-4b8c-b6c3-300dc1ed8e46\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6sgg2"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.894465 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f88820f-4a65-4799-86f7-19be89871165-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-2s8ds\" (UID: \"2f88820f-4a65-4799-86f7-19be89871165\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2s8ds"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.894487 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9a409b5-e519-4c64-bc56-0b74757f2181-serving-cert\") pod \"service-ca-operator-777779d784-tv6dp\" (UID: \"e9a409b5-e519-4c64-bc56-0b74757f2181\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tv6dp"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.894527 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3f91eb97-e4cc-4a67-9426-7aec499b4485-proxy-tls\") pod \"machine-config-controller-84d6567774-rcksw\" (UID: \"3f91eb97-e4cc-4a67-9426-7aec499b4485\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rcksw"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.894551 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e3640120-a52b-4ee5-aacb-83df135f0470-cert\") pod \"ingress-canary-5qtks\" (UID: \"e3640120-a52b-4ee5-aacb-83df135f0470\") " pod="openshift-ingress-canary/ingress-canary-5qtks"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.894602 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9a409b5-e519-4c64-bc56-0b74757f2181-config\") pod \"service-ca-operator-777779d784-tv6dp\" (UID: \"e9a409b5-e519-4c64-bc56-0b74757f2181\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tv6dp"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.894626 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ddda125-6c9a-4546-901a-a32dd6e99251-config\") pod \"kube-apiserver-operator-766d6c64bb-bxgr9\" (UID: \"9ddda125-6c9a-4546-901a-a32dd6e99251\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bxgr9"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.894651 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/75dcccce-425a-46ab-bfeb-dc5a0ee835d4-registry-certificates\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.894676 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/75dcccce-425a-46ab-bfeb-dc5a0ee835d4-ca-trust-extracted\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.894697 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nl6c\" (UniqueName: \"kubernetes.io/projected/e01e843d-f221-43ed-a309-e21fe298f64f-kube-api-access-8nl6c\") pod \"migrator-59844c95c7-d8wjb\" (UID: \"e01e843d-f221-43ed-a309-e21fe298f64f\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-d8wjb"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.894720 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/10a252bf-8be9-40ee-9632-4abbb989e43d-webhook-cert\") pod \"packageserver-d55dfcdfc-98pt8\" (UID: \"10a252bf-8be9-40ee-9632-4abbb989e43d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-98pt8"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.894742 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0335a481-e6c1-459c-8325-5da8dfcbcdb1-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-9nmqg\" (UID: \"0335a481-e6c1-459c-8325-5da8dfcbcdb1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9nmqg"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.894765 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/75dcccce-425a-46ab-bfeb-dc5a0ee835d4-trusted-ca\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.894808 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fk5bd\" (UniqueName: \"kubernetes.io/projected/0335a481-e6c1-459c-8325-5da8dfcbcdb1-kube-api-access-fk5bd\") pod \"kube-storage-version-migrator-operator-b67b599dd-9nmqg\" (UID: \"0335a481-e6c1-459c-8325-5da8dfcbcdb1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9nmqg"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.894835 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8874\" (UniqueName: \"kubernetes.io/projected/3ef7a187-ce98-488c-a9b0-e16449e2882f-kube-api-access-n8874\") pod \"collect-profiles-29484825-hgsdh\" (UID: \"3ef7a187-ce98-488c-a9b0-e16449e2882f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484825-hgsdh"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.894861 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/73369200-053d-4d9d-a775-c3cb76119697-images\") pod \"machine-config-operator-74547568cd-m5n64\" (UID: \"73369200-053d-4d9d-a775-c3cb76119697\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m5n64"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.894881 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f88820f-4a65-4799-86f7-19be89871165-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-2s8ds\" (UID: \"2f88820f-4a65-4799-86f7-19be89871165\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2s8ds"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.894903 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/153c6af8-5ac1-4256-ad20-992ad604c61b-signing-cabundle\") pod \"service-ca-9c57cc56f-gcpwt\" (UID: \"153c6af8-5ac1-4256-ad20-992ad604c61b\") " pod="openshift-service-ca/service-ca-9c57cc56f-gcpwt"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.894924 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb8xv\" (UniqueName: \"kubernetes.io/projected/73369200-053d-4d9d-a775-c3cb76119697-kube-api-access-tb8xv\") pod \"machine-config-operator-74547568cd-m5n64\" (UID: \"73369200-053d-4d9d-a775-c3cb76119697\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m5n64"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.894944 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3f91eb97-e4cc-4a67-9426-7aec499b4485-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-rcksw\" (UID: \"3f91eb97-e4cc-4a67-9426-7aec499b4485\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rcksw"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.894979 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpcbg\" (UniqueName: \"kubernetes.io/projected/7d18d670-f698-4b8c-b6c3-300dc1ed8e46-kube-api-access-tpcbg\") pod \"olm-operator-6b444d44fb-6sgg2\" (UID: \"7d18d670-f698-4b8c-b6c3-300dc1ed8e46\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6sgg2"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.894997 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/153c6af8-5ac1-4256-ad20-992ad604c61b-signing-key\") pod \"service-ca-9c57cc56f-gcpwt\" (UID: \"153c6af8-5ac1-4256-ad20-992ad604c61b\") " pod="openshift-service-ca/service-ca-9c57cc56f-gcpwt"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.895028 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77vr8\" (UniqueName: \"kubernetes.io/projected/10a252bf-8be9-40ee-9632-4abbb989e43d-kube-api-access-77vr8\") pod \"packageserver-d55dfcdfc-98pt8\" (UID: \"10a252bf-8be9-40ee-9632-4abbb989e43d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-98pt8"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.895050 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1fbc7f2a-fce4-4747-9a96-1fc4631a6197-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-5lfqv\" (UID: \"1fbc7f2a-fce4-4747-9a96-1fc4631a6197\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5lfqv"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.898395 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/75dcccce-425a-46ab-bfeb-dc5a0ee835d4-installation-pull-secrets\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.898449 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/75dcccce-425a-46ab-bfeb-dc5a0ee835d4-bound-sa-token\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.898663 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxdbq\" (UniqueName: \"kubernetes.io/projected/dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae-kube-api-access-vxdbq\") pod \"marketplace-operator-79b997595-5jwbt\" (UID: \"dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae\") " pod="openshift-marketplace/marketplace-operator-79b997595-5jwbt"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.898725 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/10a252bf-8be9-40ee-9632-4abbb989e43d-apiservice-cert\") pod \"packageserver-d55dfcdfc-98pt8\" (UID: \"10a252bf-8be9-40ee-9632-4abbb989e43d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-98pt8"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.898825 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jswq\" (UniqueName: \"kubernetes.io/projected/153c6af8-5ac1-4256-ad20-992ad604c61b-kube-api-access-2jswq\") pod \"service-ca-9c57cc56f-gcpwt\" (UID: \"153c6af8-5ac1-4256-ad20-992ad604c61b\") " pod="openshift-service-ca/service-ca-9c57cc56f-gcpwt"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.898912 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.898941 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1fbc7f2a-fce4-4747-9a96-1fc4631a6197-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-5lfqv\" (UID: \"1fbc7f2a-fce4-4747-9a96-1fc4631a6197\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5lfqv"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.898986 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxm44\" (UniqueName: \"kubernetes.io/projected/2f88820f-4a65-4799-86f7-19be89871165-kube-api-access-cxm44\") pod \"openshift-controller-manager-operator-756b6f6bc6-2s8ds\" (UID: \"2f88820f-4a65-4799-86f7-19be89871165\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2s8ds"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.899093 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/73369200-053d-4d9d-a775-c3cb76119697-auth-proxy-config\") pod \"machine-config-operator-74547568cd-m5n64\" (UID: \"73369200-053d-4d9d-a775-c3cb76119697\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m5n64"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.899155 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3ef7a187-ce98-488c-a9b0-e16449e2882f-config-volume\") pod \"collect-profiles-29484825-hgsdh\" (UID: \"3ef7a187-ce98-488c-a9b0-e16449e2882f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484825-hgsdh"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.899180 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-5jwbt\" (UID: \"dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae\") " pod="openshift-marketplace/marketplace-operator-79b997595-5jwbt"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.899219 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvd2r\" (UniqueName: \"kubernetes.io/projected/e9a409b5-e519-4c64-bc56-0b74757f2181-kube-api-access-dvd2r\") pod \"service-ca-operator-777779d784-tv6dp\" (UID: \"e9a409b5-e519-4c64-bc56-0b74757f2181\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tv6dp"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.899248 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vg9rn\" (UniqueName: \"kubernetes.io/projected/75dcccce-425a-46ab-bfeb-dc5a0ee835d4-kube-api-access-vg9rn\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.899273 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wkzs\" (UniqueName: \"kubernetes.io/projected/3f91eb97-e4cc-4a67-9426-7aec499b4485-kube-api-access-9wkzs\") pod \"machine-config-controller-84d6567774-rcksw\" (UID: \"3f91eb97-e4cc-4a67-9426-7aec499b4485\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rcksw"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.899326 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a9e87e73-cad4-48f0-81f9-d636cd123278-bound-sa-token\") pod \"ingress-operator-5b745b69d9-9z2dj\" (UID: \"a9e87e73-cad4-48f0-81f9-d636cd123278\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9z2dj"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.899395 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9ddda125-6c9a-4546-901a-a32dd6e99251-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-bxgr9\" (UID: \"9ddda125-6c9a-4546-901a-a32dd6e99251\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bxgr9"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.899497 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f89vh\" (UniqueName: \"kubernetes.io/projected/e3640120-a52b-4ee5-aacb-83df135f0470-kube-api-access-f89vh\") pod \"ingress-canary-5qtks\" (UID: \"e3640120-a52b-4ee5-aacb-83df135f0470\") " pod="openshift-ingress-canary/ingress-canary-5qtks"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.899708 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btx8b\" (UniqueName: \"kubernetes.io/projected/a9e87e73-cad4-48f0-81f9-d636cd123278-kube-api-access-btx8b\") pod \"ingress-operator-5b745b69d9-9z2dj\" (UID: \"a9e87e73-cad4-48f0-81f9-d636cd123278\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9z2dj"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.899737 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7d18d670-f698-4b8c-b6c3-300dc1ed8e46-profile-collector-cert\") pod \"olm-operator-6b444d44fb-6sgg2\" (UID: \"7d18d670-f698-4b8c-b6c3-300dc1ed8e46\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6sgg2"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.899854 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/73369200-053d-4d9d-a775-c3cb76119697-proxy-tls\") pod \"machine-config-operator-74547568cd-m5n64\" (UID: \"73369200-053d-4d9d-a775-c3cb76119697\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m5n64"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.899877 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a9e87e73-cad4-48f0-81f9-d636cd123278-trusted-ca\") pod \"ingress-operator-5b745b69d9-9z2dj\" (UID: \"a9e87e73-cad4-48f0-81f9-d636cd123278\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9z2dj"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.899908 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-5jwbt\" (UID: \"dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae\") " pod="openshift-marketplace/marketplace-operator-79b997595-5jwbt"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.899952 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fbc7f2a-fce4-4747-9a96-1fc4631a6197-config\") pod \"kube-controller-manager-operator-78b949d7b-5lfqv\" (UID: \"1fbc7f2a-fce4-4747-9a96-1fc4631a6197\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5lfqv"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.899999 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3ef7a187-ce98-488c-a9b0-e16449e2882f-secret-volume\") pod \"collect-profiles-29484825-hgsdh\" (UID: \"3ef7a187-ce98-488c-a9b0-e16449e2882f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484825-hgsdh"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.900018 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/10a252bf-8be9-40ee-9632-4abbb989e43d-tmpfs\") pod \"packageserver-d55dfcdfc-98pt8\" (UID: \"10a252bf-8be9-40ee-9632-4abbb989e43d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-98pt8"
Jan 22 13:45:57 crc kubenswrapper[4769]: E0122 13:45:57.901700 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 13:45:58.401688235 +0000 UTC m=+137.812798164 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jhd8d" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.944374 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-k5psf"]
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.953014 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-9mm5p"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.974182 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-q8sxk"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.977980 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xmh8s"]
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.984402 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-pb7qw"
Jan 22 13:45:57 crc kubenswrapper[4769]: W0122 13:45:57.992171 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b0fa7ff_24c4_431c_bc35_87f9483d5c70.slice/crio-99824953bd8e0a8c9f25b06e40921ab235122e7afd37d061ee57a611b654dd94 WatchSource:0}: Error finding container 99824953bd8e0a8c9f25b06e40921ab235122e7afd37d061ee57a611b654dd94: Status 404 returned error can't find the container with id 99824953bd8e0a8c9f25b06e40921ab235122e7afd37d061ee57a611b654dd94
Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.002642 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.002890 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/153c6af8-5ac1-4256-ad20-992ad604c61b-signing-cabundle\") pod \"service-ca-9c57cc56f-gcpwt\" (UID: \"153c6af8-5ac1-4256-ad20-992ad604c61b\") " pod="openshift-service-ca/service-ca-9c57cc56f-gcpwt"
Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.002919 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tb8xv\" (UniqueName: \"kubernetes.io/projected/73369200-053d-4d9d-a775-c3cb76119697-kube-api-access-tb8xv\") pod \"machine-config-operator-74547568cd-m5n64\" (UID: \"73369200-053d-4d9d-a775-c3cb76119697\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m5n64"
Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.002943 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3f91eb97-e4cc-4a67-9426-7aec499b4485-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-rcksw\" (UID: \"3f91eb97-e4cc-4a67-9426-7aec499b4485\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rcksw"
Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.002964 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpcbg\" (UniqueName: \"kubernetes.io/projected/7d18d670-f698-4b8c-b6c3-300dc1ed8e46-kube-api-access-tpcbg\") pod \"olm-operator-6b444d44fb-6sgg2\" (UID: \"7d18d670-f698-4b8c-b6c3-300dc1ed8e46\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6sgg2"
Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.002985 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/153c6af8-5ac1-4256-ad20-992ad604c61b-signing-key\") pod \"service-ca-9c57cc56f-gcpwt\" (UID: \"153c6af8-5ac1-4256-ad20-992ad604c61b\") " pod="openshift-service-ca/service-ca-9c57cc56f-gcpwt"
Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.003007 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77vr8\" (UniqueName: \"kubernetes.io/projected/10a252bf-8be9-40ee-9632-4abbb989e43d-kube-api-access-77vr8\") pod \"packageserver-d55dfcdfc-98pt8\" (UID: \"10a252bf-8be9-40ee-9632-4abbb989e43d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-98pt8"
Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.003031 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/6e9c7f00-95b3-4453-8d82-df8b88a2bc8a-plugins-dir\") pod \"csi-hostpathplugin-xdxvs\" (UID: \"6e9c7f00-95b3-4453-8d82-df8b88a2bc8a\") " pod="hostpath-provisioner/csi-hostpathplugin-xdxvs"
Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.003058 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1fbc7f2a-fce4-4747-9a96-1fc4631a6197-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-5lfqv\" (UID: \"1fbc7f2a-fce4-4747-9a96-1fc4631a6197\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5lfqv"
Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.003079 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxdbq\" (UniqueName: \"kubernetes.io/projected/dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae-kube-api-access-vxdbq\") pod \"marketplace-operator-79b997595-5jwbt\" (UID: \"dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae\") " pod="openshift-marketplace/marketplace-operator-79b997595-5jwbt"
Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.003101 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/6e9c7f00-95b3-4453-8d82-df8b88a2bc8a-mountpoint-dir\") pod \"csi-hostpathplugin-xdxvs\" (UID: \"6e9c7f00-95b3-4453-8d82-df8b88a2bc8a\") " pod="hostpath-provisioner/csi-hostpathplugin-xdxvs"
Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.003125 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/75dcccce-425a-46ab-bfeb-dc5a0ee835d4-installation-pull-secrets\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d"
Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.003150 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/75dcccce-425a-46ab-bfeb-dc5a0ee835d4-bound-sa-token\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d"
Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.003177 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6e9c7f00-95b3-4453-8d82-df8b88a2bc8a-registration-dir\") pod \"csi-hostpathplugin-xdxvs\" (UID: \"6e9c7f00-95b3-4453-8d82-df8b88a2bc8a\") " pod="hostpath-provisioner/csi-hostpathplugin-xdxvs"
Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.003200 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/10a252bf-8be9-40ee-9632-4abbb989e43d-apiservice-cert\") pod \"packageserver-d55dfcdfc-98pt8\" (UID: \"10a252bf-8be9-40ee-9632-4abbb989e43d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-98pt8"
Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.003226 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkz49\" (UniqueName: \"kubernetes.io/projected/eed71162-446a-4681-a3a8-23247149532c-kube-api-access-xkz49\") pod \"machine-config-server-ggj4q\" (UID: \"eed71162-446a-4681-a3a8-23247149532c\") " pod="openshift-machine-config-operator/machine-config-server-ggj4q"
Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.003254 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jswq\" (UniqueName: \"kubernetes.io/projected/153c6af8-5ac1-4256-ad20-992ad604c61b-kube-api-access-2jswq\") pod \"service-ca-9c57cc56f-gcpwt\" (UID: \"153c6af8-5ac1-4256-ad20-992ad604c61b\") " pod="openshift-service-ca/service-ca-9c57cc56f-gcpwt"
Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.003286 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6e9c7f00-95b3-4453-8d82-df8b88a2bc8a-socket-dir\") pod \"csi-hostpathplugin-xdxvs\" (UID: \"6e9c7f00-95b3-4453-8d82-df8b88a2bc8a\") " pod="hostpath-provisioner/csi-hostpathplugin-xdxvs"
Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.003310 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1fbc7f2a-fce4-4747-9a96-1fc4631a6197-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-5lfqv\" (UID: \"1fbc7f2a-fce4-4747-9a96-1fc4631a6197\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5lfqv"
Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.003349 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxm44\" (UniqueName: \"kubernetes.io/projected/2f88820f-4a65-4799-86f7-19be89871165-kube-api-access-cxm44\") pod \"openshift-controller-manager-operator-756b6f6bc6-2s8ds\" (UID: \"2f88820f-4a65-4799-86f7-19be89871165\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2s8ds"
Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.003375 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/6e9c7f00-95b3-4453-8d82-df8b88a2bc8a-csi-data-dir\") pod \"csi-hostpathplugin-xdxvs\" (UID: \"6e9c7f00-95b3-4453-8d82-df8b88a2bc8a\") " pod="hostpath-provisioner/csi-hostpathplugin-xdxvs"
Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.003401 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/eed71162-446a-4681-a3a8-23247149532c-certs\") pod \"machine-config-server-ggj4q\" (UID: \"eed71162-446a-4681-a3a8-23247149532c\") " pod="openshift-machine-config-operator/machine-config-server-ggj4q"
Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.003425 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/73369200-053d-4d9d-a775-c3cb76119697-auth-proxy-config\") pod \"machine-config-operator-74547568cd-m5n64\" (UID: \"73369200-053d-4d9d-a775-c3cb76119697\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m5n64"
Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.003451 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3ef7a187-ce98-488c-a9b0-e16449e2882f-config-volume\") pod \"collect-profiles-29484825-hgsdh\" (UID: \"3ef7a187-ce98-488c-a9b0-e16449e2882f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484825-hgsdh"
Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.003476 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-5jwbt\" (UID: \"dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae\") " pod="openshift-marketplace/marketplace-operator-79b997595-5jwbt"
Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.003502 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf805bae-0da1-4a8b-a8c8-6c99cf8ce515-config-volume\") pod \"dns-default-rkk84\" (UID: \"bf805bae-0da1-4a8b-a8c8-6c99cf8ce515\") " pod="openshift-dns/dns-default-rkk84"
Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.003526 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvd2r\" (UniqueName: \"kubernetes.io/projected/e9a409b5-e519-4c64-bc56-0b74757f2181-kube-api-access-dvd2r\") pod \"service-ca-operator-777779d784-tv6dp\" (UID: \"e9a409b5-e519-4c64-bc56-0b74757f2181\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tv6dp"
Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.003548 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vg9rn\" (UniqueName: \"kubernetes.io/projected/75dcccce-425a-46ab-bfeb-dc5a0ee835d4-kube-api-access-vg9rn\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d"
Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.003571 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wkzs\" (UniqueName: 
\"kubernetes.io/projected/3f91eb97-e4cc-4a67-9426-7aec499b4485-kube-api-access-9wkzs\") pod \"machine-config-controller-84d6567774-rcksw\" (UID: \"3f91eb97-e4cc-4a67-9426-7aec499b4485\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rcksw" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.003592 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a9e87e73-cad4-48f0-81f9-d636cd123278-bound-sa-token\") pod \"ingress-operator-5b745b69d9-9z2dj\" (UID: \"a9e87e73-cad4-48f0-81f9-d636cd123278\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9z2dj" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.003628 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9ddda125-6c9a-4546-901a-a32dd6e99251-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-bxgr9\" (UID: \"9ddda125-6c9a-4546-901a-a32dd6e99251\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bxgr9" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.003655 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f89vh\" (UniqueName: \"kubernetes.io/projected/e3640120-a52b-4ee5-aacb-83df135f0470-kube-api-access-f89vh\") pod \"ingress-canary-5qtks\" (UID: \"e3640120-a52b-4ee5-aacb-83df135f0470\") " pod="openshift-ingress-canary/ingress-canary-5qtks" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.003674 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btx8b\" (UniqueName: \"kubernetes.io/projected/a9e87e73-cad4-48f0-81f9-d636cd123278-kube-api-access-btx8b\") pod \"ingress-operator-5b745b69d9-9z2dj\" (UID: \"a9e87e73-cad4-48f0-81f9-d636cd123278\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9z2dj" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.003696 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bf805bae-0da1-4a8b-a8c8-6c99cf8ce515-metrics-tls\") pod \"dns-default-rkk84\" (UID: \"bf805bae-0da1-4a8b-a8c8-6c99cf8ce515\") " pod="openshift-dns/dns-default-rkk84" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.003720 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7d18d670-f698-4b8c-b6c3-300dc1ed8e46-profile-collector-cert\") pod \"olm-operator-6b444d44fb-6sgg2\" (UID: \"7d18d670-f698-4b8c-b6c3-300dc1ed8e46\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6sgg2" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.003746 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/73369200-053d-4d9d-a775-c3cb76119697-proxy-tls\") pod \"machine-config-operator-74547568cd-m5n64\" (UID: \"73369200-053d-4d9d-a775-c3cb76119697\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m5n64" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.003767 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a9e87e73-cad4-48f0-81f9-d636cd123278-trusted-ca\") pod \"ingress-operator-5b745b69d9-9z2dj\" (UID: \"a9e87e73-cad4-48f0-81f9-d636cd123278\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9z2dj" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.003787 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5d78\" (UniqueName: \"kubernetes.io/projected/6e9c7f00-95b3-4453-8d82-df8b88a2bc8a-kube-api-access-b5d78\") pod \"csi-hostpathplugin-xdxvs\" (UID: \"6e9c7f00-95b3-4453-8d82-df8b88a2bc8a\") " pod="hostpath-provisioner/csi-hostpathplugin-xdxvs" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.003828 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-5jwbt\" (UID: \"dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae\") " pod="openshift-marketplace/marketplace-operator-79b997595-5jwbt" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.003850 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fbc7f2a-fce4-4747-9a96-1fc4631a6197-config\") pod \"kube-controller-manager-operator-78b949d7b-5lfqv\" (UID: \"1fbc7f2a-fce4-4747-9a96-1fc4631a6197\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5lfqv" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.003872 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3ef7a187-ce98-488c-a9b0-e16449e2882f-secret-volume\") pod \"collect-profiles-29484825-hgsdh\" (UID: \"3ef7a187-ce98-488c-a9b0-e16449e2882f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484825-hgsdh" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.003894 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/10a252bf-8be9-40ee-9632-4abbb989e43d-tmpfs\") pod \"packageserver-d55dfcdfc-98pt8\" (UID: \"10a252bf-8be9-40ee-9632-4abbb989e43d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-98pt8" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.003917 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a9e87e73-cad4-48f0-81f9-d636cd123278-metrics-tls\") pod \"ingress-operator-5b745b69d9-9z2dj\" (UID: \"a9e87e73-cad4-48f0-81f9-d636cd123278\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9z2dj" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.003959 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/eed71162-446a-4681-a3a8-23247149532c-node-bootstrap-token\") pod \"machine-config-server-ggj4q\" (UID: \"eed71162-446a-4681-a3a8-23247149532c\") " pod="openshift-machine-config-operator/machine-config-server-ggj4q" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.004018 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0335a481-e6c1-459c-8325-5da8dfcbcdb1-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-9nmqg\" (UID: \"0335a481-e6c1-459c-8325-5da8dfcbcdb1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9nmqg" Jan 22 13:45:58 crc 
kubenswrapper[4769]: I0122 13:45:58.004043 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ddda125-6c9a-4546-901a-a32dd6e99251-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-bxgr9\" (UID: \"9ddda125-6c9a-4546-901a-a32dd6e99251\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bxgr9" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.004087 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/75dcccce-425a-46ab-bfeb-dc5a0ee835d4-registry-tls\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.004110 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7d18d670-f698-4b8c-b6c3-300dc1ed8e46-srv-cert\") pod \"olm-operator-6b444d44fb-6sgg2\" (UID: \"7d18d670-f698-4b8c-b6c3-300dc1ed8e46\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6sgg2" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.004133 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9a409b5-e519-4c64-bc56-0b74757f2181-serving-cert\") pod \"service-ca-operator-777779d784-tv6dp\" (UID: \"e9a409b5-e519-4c64-bc56-0b74757f2181\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tv6dp" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.004157 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f88820f-4a65-4799-86f7-19be89871165-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-2s8ds\" (UID: \"2f88820f-4a65-4799-86f7-19be89871165\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2s8ds" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.004184 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3f91eb97-e4cc-4a67-9426-7aec499b4485-proxy-tls\") pod \"machine-config-controller-84d6567774-rcksw\" (UID: \"3f91eb97-e4cc-4a67-9426-7aec499b4485\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rcksw" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.004211 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e3640120-a52b-4ee5-aacb-83df135f0470-cert\") pod \"ingress-canary-5qtks\" (UID: \"e3640120-a52b-4ee5-aacb-83df135f0470\") " pod="openshift-ingress-canary/ingress-canary-5qtks" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.004249 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9a409b5-e519-4c64-bc56-0b74757f2181-config\") pod \"service-ca-operator-777779d784-tv6dp\" (UID: \"e9a409b5-e519-4c64-bc56-0b74757f2181\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tv6dp" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.004274 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/75dcccce-425a-46ab-bfeb-dc5a0ee835d4-registry-certificates\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.004296 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ddda125-6c9a-4546-901a-a32dd6e99251-config\") pod \"kube-apiserver-operator-766d6c64bb-bxgr9\" (UID: \"9ddda125-6c9a-4546-901a-a32dd6e99251\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bxgr9" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.004334 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/75dcccce-425a-46ab-bfeb-dc5a0ee835d4-ca-trust-extracted\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.004379 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nl6c\" (UniqueName: \"kubernetes.io/projected/e01e843d-f221-43ed-a309-e21fe298f64f-kube-api-access-8nl6c\") pod \"migrator-59844c95c7-d8wjb\" (UID: \"e01e843d-f221-43ed-a309-e21fe298f64f\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-d8wjb" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.004400 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/10a252bf-8be9-40ee-9632-4abbb989e43d-webhook-cert\") pod \"packageserver-d55dfcdfc-98pt8\" (UID: \"10a252bf-8be9-40ee-9632-4abbb989e43d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-98pt8" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.004423 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0335a481-e6c1-459c-8325-5da8dfcbcdb1-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-9nmqg\" (UID: \"0335a481-e6c1-459c-8325-5da8dfcbcdb1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9nmqg" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.004447 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/75dcccce-425a-46ab-bfeb-dc5a0ee835d4-trusted-ca\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.004487 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fk5bd\" (UniqueName: \"kubernetes.io/projected/0335a481-e6c1-459c-8325-5da8dfcbcdb1-kube-api-access-fk5bd\") pod \"kube-storage-version-migrator-operator-b67b599dd-9nmqg\" (UID: \"0335a481-e6c1-459c-8325-5da8dfcbcdb1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9nmqg" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.004514 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8874\" (UniqueName: \"kubernetes.io/projected/3ef7a187-ce98-488c-a9b0-e16449e2882f-kube-api-access-n8874\") pod 
\"collect-profiles-29484825-hgsdh\" (UID: \"3ef7a187-ce98-488c-a9b0-e16449e2882f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484825-hgsdh" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.004535 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/73369200-053d-4d9d-a775-c3cb76119697-images\") pod \"machine-config-operator-74547568cd-m5n64\" (UID: \"73369200-053d-4d9d-a775-c3cb76119697\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m5n64" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.004555 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f88820f-4a65-4799-86f7-19be89871165-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-2s8ds\" (UID: \"2f88820f-4a65-4799-86f7-19be89871165\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2s8ds" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.004571 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkj6s\" (UniqueName: \"kubernetes.io/projected/bf805bae-0da1-4a8b-a8c8-6c99cf8ce515-kube-api-access-pkj6s\") pod \"dns-default-rkk84\" (UID: \"bf805bae-0da1-4a8b-a8c8-6c99cf8ce515\") " pod="openshift-dns/dns-default-rkk84" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.007744 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-jtzpg"] Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.013069 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/73369200-053d-4d9d-a775-c3cb76119697-auth-proxy-config\") pod \"machine-config-operator-74547568cd-m5n64\" (UID: \"73369200-053d-4d9d-a775-c3cb76119697\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m5n64" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.013068 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/153c6af8-5ac1-4256-ad20-992ad604c61b-signing-cabundle\") pod \"service-ca-9c57cc56f-gcpwt\" (UID: \"153c6af8-5ac1-4256-ad20-992ad604c61b\") " pod="openshift-service-ca/service-ca-9c57cc56f-gcpwt" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.013672 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3f91eb97-e4cc-4a67-9426-7aec499b4485-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-rcksw\" (UID: \"3f91eb97-e4cc-4a67-9426-7aec499b4485\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rcksw" Jan 22 13:45:58 crc kubenswrapper[4769]: E0122 13:45:58.013917 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:45:58.513883402 +0000 UTC m=+137.924993361 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.017725 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/75dcccce-425a-46ab-bfeb-dc5a0ee835d4-trusted-ca\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.018055 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a9e87e73-cad4-48f0-81f9-d636cd123278-trusted-ca\") pod \"ingress-operator-5b745b69d9-9z2dj\" (UID: \"a9e87e73-cad4-48f0-81f9-d636cd123278\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9z2dj" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.018233 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3ef7a187-ce98-488c-a9b0-e16449e2882f-config-volume\") pod \"collect-profiles-29484825-hgsdh\" (UID: \"3ef7a187-ce98-488c-a9b0-e16449e2882f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484825-hgsdh" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.019181 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/73369200-053d-4d9d-a775-c3cb76119697-images\") pod \"machine-config-operator-74547568cd-m5n64\" (UID: \"73369200-053d-4d9d-a775-c3cb76119697\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m5n64" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.019411 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f88820f-4a65-4799-86f7-19be89871165-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-2s8ds\" (UID: \"2f88820f-4a65-4799-86f7-19be89871165\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2s8ds" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.019725 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ddda125-6c9a-4546-901a-a32dd6e99251-config\") pod \"kube-apiserver-operator-766d6c64bb-bxgr9\" (UID: \"9ddda125-6c9a-4546-901a-a32dd6e99251\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bxgr9" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.019732 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/10a252bf-8be9-40ee-9632-4abbb989e43d-tmpfs\") pod \"packageserver-d55dfcdfc-98pt8\" (UID: \"10a252bf-8be9-40ee-9632-4abbb989e43d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-98pt8" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.021354 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9a409b5-e519-4c64-bc56-0b74757f2181-config\") pod 
\"service-ca-operator-777779d784-tv6dp\" (UID: \"e9a409b5-e519-4c64-bc56-0b74757f2181\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tv6dp" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.022643 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0335a481-e6c1-459c-8325-5da8dfcbcdb1-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-9nmqg\" (UID: \"0335a481-e6c1-459c-8325-5da8dfcbcdb1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9nmqg" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.022859 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/75dcccce-425a-46ab-bfeb-dc5a0ee835d4-registry-certificates\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.023184 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-5jwbt\" (UID: \"dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae\") " pod="openshift-marketplace/marketplace-operator-79b997595-5jwbt" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.023376 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/75dcccce-425a-46ab-bfeb-dc5a0ee835d4-ca-trust-extracted\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.025166 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-ds5qk" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.025832 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fbc7f2a-fce4-4747-9a96-1fc4631a6197-config\") pod \"kube-controller-manager-operator-78b949d7b-5lfqv\" (UID: \"1fbc7f2a-fce4-4747-9a96-1fc4631a6197\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5lfqv" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.026009 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/73369200-053d-4d9d-a775-c3cb76119697-proxy-tls\") pod \"machine-config-operator-74547568cd-m5n64\" (UID: \"73369200-053d-4d9d-a775-c3cb76119697\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m5n64" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.026689 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qp45"] Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.030458 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1fbc7f2a-fce4-4747-9a96-1fc4631a6197-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-5lfqv\" (UID: \"1fbc7f2a-fce4-4747-9a96-1fc4631a6197\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5lfqv" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.030884 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7d18d670-f698-4b8c-b6c3-300dc1ed8e46-profile-collector-cert\") pod \"olm-operator-6b444d44fb-6sgg2\" (UID: \"7d18d670-f698-4b8c-b6c3-300dc1ed8e46\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6sgg2" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.030897 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/75dcccce-425a-46ab-bfeb-dc5a0ee835d4-installation-pull-secrets\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.035024 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0335a481-e6c1-459c-8325-5da8dfcbcdb1-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-9nmqg\" (UID: \"0335a481-e6c1-459c-8325-5da8dfcbcdb1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9nmqg" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.038635 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pzj8w" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.039912 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/10a252bf-8be9-40ee-9632-4abbb989e43d-webhook-cert\") pod \"packageserver-d55dfcdfc-98pt8\" (UID: \"10a252bf-8be9-40ee-9632-4abbb989e43d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-98pt8" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.040318 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3ef7a187-ce98-488c-a9b0-e16449e2882f-secret-volume\") pod \"collect-profiles-29484825-hgsdh\" (UID: \"3ef7a187-ce98-488c-a9b0-e16449e2882f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484825-hgsdh" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.040705 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f88820f-4a65-4799-86f7-19be89871165-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-2s8ds\" (UID: \"2f88820f-4a65-4799-86f7-19be89871165\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2s8ds" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.040945 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ddda125-6c9a-4546-901a-a32dd6e99251-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-bxgr9\" (UID: \"9ddda125-6c9a-4546-901a-a32dd6e99251\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bxgr9" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.041086 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/10a252bf-8be9-40ee-9632-4abbb989e43d-apiservice-cert\") pod \"packageserver-d55dfcdfc-98pt8\" (UID: \"10a252bf-8be9-40ee-9632-4abbb989e43d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-98pt8" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.042643 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/153c6af8-5ac1-4256-ad20-992ad604c61b-signing-key\") pod \"service-ca-9c57cc56f-gcpwt\" (UID: \"153c6af8-5ac1-4256-ad20-992ad604c61b\") " pod="openshift-service-ca/service-ca-9c57cc56f-gcpwt" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.042716 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a9e87e73-cad4-48f0-81f9-d636cd123278-metrics-tls\") pod \"ingress-operator-5b745b69d9-9z2dj\" (UID: \"a9e87e73-cad4-48f0-81f9-d636cd123278\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9z2dj" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.044286 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-5jwbt\" (UID: \"dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae\") " pod="openshift-marketplace/marketplace-operator-79b997595-5jwbt" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.045538 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"registry-tls\" (UniqueName: \"kubernetes.io/projected/75dcccce-425a-46ab-bfeb-dc5a0ee835d4-registry-tls\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.053679 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7d18d670-f698-4b8c-b6c3-300dc1ed8e46-srv-cert\") pod \"olm-operator-6b444d44fb-6sgg2\" (UID: \"7d18d670-f698-4b8c-b6c3-300dc1ed8e46\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6sgg2" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.053825 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9a409b5-e519-4c64-bc56-0b74757f2181-serving-cert\") pod \"service-ca-operator-777779d784-tv6dp\" (UID: \"e9a409b5-e519-4c64-bc56-0b74757f2181\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tv6dp" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.054592 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxdbq\" (UniqueName: \"kubernetes.io/projected/dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae-kube-api-access-vxdbq\") pod \"marketplace-operator-79b997595-5jwbt\" (UID: \"dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae\") " pod="openshift-marketplace/marketplace-operator-79b997595-5jwbt" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.057263 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-5jwbt" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.062613 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3f91eb97-e4cc-4a67-9426-7aec499b4485-proxy-tls\") pod \"machine-config-controller-84d6567774-rcksw\" (UID: \"3f91eb97-e4cc-4a67-9426-7aec499b4485\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rcksw" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.068606 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-t5985"] Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.071339 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e3640120-a52b-4ee5-aacb-83df135f0470-cert\") pod \"ingress-canary-5qtks\" (UID: \"e3640120-a52b-4ee5-aacb-83df135f0470\") " pod="openshift-ingress-canary/ingress-canary-5qtks" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.080632 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-jjt2k"] Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.080689 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-nwrtw"] Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.087365 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vg9rn\" (UniqueName: \"kubernetes.io/projected/75dcccce-425a-46ab-bfeb-dc5a0ee835d4-kube-api-access-vg9rn\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.090117 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9ddda125-6c9a-4546-901a-a32dd6e99251-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-bxgr9\" (UID: \"9ddda125-6c9a-4546-901a-a32dd6e99251\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bxgr9" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.105371 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5d78\" (UniqueName: \"kubernetes.io/projected/6e9c7f00-95b3-4453-8d82-df8b88a2bc8a-kube-api-access-b5d78\") pod \"csi-hostpathplugin-xdxvs\" (UID: \"6e9c7f00-95b3-4453-8d82-df8b88a2bc8a\") " pod="hostpath-provisioner/csi-hostpathplugin-xdxvs" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.105434 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/eed71162-446a-4681-a3a8-23247149532c-node-bootstrap-token\") pod \"machine-config-server-ggj4q\" (UID: \"eed71162-446a-4681-a3a8-23247149532c\") " pod="openshift-machine-config-operator/machine-config-server-ggj4q" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.105518 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkj6s\" (UniqueName: \"kubernetes.io/projected/bf805bae-0da1-4a8b-a8c8-6c99cf8ce515-kube-api-access-pkj6s\") pod \"dns-default-rkk84\" (UID: \"bf805bae-0da1-4a8b-a8c8-6c99cf8ce515\") " pod="openshift-dns/dns-default-rkk84" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.105551 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/6e9c7f00-95b3-4453-8d82-df8b88a2bc8a-plugins-dir\") pod \"csi-hostpathplugin-xdxvs\" (UID: \"6e9c7f00-95b3-4453-8d82-df8b88a2bc8a\") " pod="hostpath-provisioner/csi-hostpathplugin-xdxvs" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.105573 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/6e9c7f00-95b3-4453-8d82-df8b88a2bc8a-mountpoint-dir\") pod \"csi-hostpathplugin-xdxvs\" (UID: \"6e9c7f00-95b3-4453-8d82-df8b88a2bc8a\") " pod="hostpath-provisioner/csi-hostpathplugin-xdxvs" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.105591 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6e9c7f00-95b3-4453-8d82-df8b88a2bc8a-registration-dir\") pod \"csi-hostpathplugin-xdxvs\" (UID: \"6e9c7f00-95b3-4453-8d82-df8b88a2bc8a\") " pod="hostpath-provisioner/csi-hostpathplugin-xdxvs" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.105608 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkz49\" (UniqueName: \"kubernetes.io/projected/eed71162-446a-4681-a3a8-23247149532c-kube-api-access-xkz49\") pod \"machine-config-server-ggj4q\" (UID: \"eed71162-446a-4681-a3a8-23247149532c\") " pod="openshift-machine-config-operator/machine-config-server-ggj4q" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.105631 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6e9c7f00-95b3-4453-8d82-df8b88a2bc8a-socket-dir\") pod \"csi-hostpathplugin-xdxvs\" (UID: \"6e9c7f00-95b3-4453-8d82-df8b88a2bc8a\") " pod="hostpath-provisioner/csi-hostpathplugin-xdxvs" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.105650 
4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.105678 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/6e9c7f00-95b3-4453-8d82-df8b88a2bc8a-csi-data-dir\") pod \"csi-hostpathplugin-xdxvs\" (UID: \"6e9c7f00-95b3-4453-8d82-df8b88a2bc8a\") " pod="hostpath-provisioner/csi-hostpathplugin-xdxvs" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.105693 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/eed71162-446a-4681-a3a8-23247149532c-certs\") pod \"machine-config-server-ggj4q\" (UID: \"eed71162-446a-4681-a3a8-23247149532c\") " pod="openshift-machine-config-operator/machine-config-server-ggj4q" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.105712 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf805bae-0da1-4a8b-a8c8-6c99cf8ce515-config-volume\") pod \"dns-default-rkk84\" (UID: \"bf805bae-0da1-4a8b-a8c8-6c99cf8ce515\") " pod="openshift-dns/dns-default-rkk84" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.105767 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bf805bae-0da1-4a8b-a8c8-6c99cf8ce515-metrics-tls\") pod \"dns-default-rkk84\" (UID: \"bf805bae-0da1-4a8b-a8c8-6c99cf8ce515\") " pod="openshift-dns/dns-default-rkk84" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.106444 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6e9c7f00-95b3-4453-8d82-df8b88a2bc8a-registration-dir\") pod \"csi-hostpathplugin-xdxvs\" (UID: \"6e9c7f00-95b3-4453-8d82-df8b88a2bc8a\") " pod="hostpath-provisioner/csi-hostpathplugin-xdxvs" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.108693 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/6e9c7f00-95b3-4453-8d82-df8b88a2bc8a-mountpoint-dir\") pod \"csi-hostpathplugin-xdxvs\" (UID: \"6e9c7f00-95b3-4453-8d82-df8b88a2bc8a\") " pod="hostpath-provisioner/csi-hostpathplugin-xdxvs" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.108701 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6e9c7f00-95b3-4453-8d82-df8b88a2bc8a-socket-dir\") pod \"csi-hostpathplugin-xdxvs\" (UID: \"6e9c7f00-95b3-4453-8d82-df8b88a2bc8a\") " pod="hostpath-provisioner/csi-hostpathplugin-xdxvs" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.108816 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/6e9c7f00-95b3-4453-8d82-df8b88a2bc8a-csi-data-dir\") pod \"csi-hostpathplugin-xdxvs\" (UID: \"6e9c7f00-95b3-4453-8d82-df8b88a2bc8a\") " pod="hostpath-provisioner/csi-hostpathplugin-xdxvs" Jan 22 13:45:58 crc kubenswrapper[4769]: E0122 13:45:58.109163 4769 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 13:45:58.609145594 +0000 UTC m=+138.020255523 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jhd8d" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.109177 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/6e9c7f00-95b3-4453-8d82-df8b88a2bc8a-plugins-dir\") pod \"csi-hostpathplugin-xdxvs\" (UID: \"6e9c7f00-95b3-4453-8d82-df8b88a2bc8a\") " pod="hostpath-provisioner/csi-hostpathplugin-xdxvs" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.109219 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf805bae-0da1-4a8b-a8c8-6c99cf8ce515-config-volume\") pod \"dns-default-rkk84\" (UID: \"bf805bae-0da1-4a8b-a8c8-6c99cf8ce515\") " pod="openshift-dns/dns-default-rkk84" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.110110 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wkzs\" (UniqueName: \"kubernetes.io/projected/3f91eb97-e4cc-4a67-9426-7aec499b4485-kube-api-access-9wkzs\") pod \"machine-config-controller-84d6567774-rcksw\" (UID: \"3f91eb97-e4cc-4a67-9426-7aec499b4485\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rcksw" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.116239 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bf805bae-0da1-4a8b-a8c8-6c99cf8ce515-metrics-tls\") pod \"dns-default-rkk84\" (UID: \"bf805bae-0da1-4a8b-a8c8-6c99cf8ce515\") " pod="openshift-dns/dns-default-rkk84" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.120763 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/eed71162-446a-4681-a3a8-23247149532c-certs\") pod \"machine-config-server-ggj4q\" (UID: \"eed71162-446a-4681-a3a8-23247149532c\") " pod="openshift-machine-config-operator/machine-config-server-ggj4q" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.128318 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a9e87e73-cad4-48f0-81f9-d636cd123278-bound-sa-token\") pod \"ingress-operator-5b745b69d9-9z2dj\" (UID: \"a9e87e73-cad4-48f0-81f9-d636cd123278\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9z2dj" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.148782 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btx8b\" (UniqueName: \"kubernetes.io/projected/a9e87e73-cad4-48f0-81f9-d636cd123278-kube-api-access-btx8b\") pod \"ingress-operator-5b745b69d9-9z2dj\" (UID: \"a9e87e73-cad4-48f0-81f9-d636cd123278\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9z2dj" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.158419 4769 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-65brj"] Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.168384 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f89vh\" (UniqueName: \"kubernetes.io/projected/e3640120-a52b-4ee5-aacb-83df135f0470-kube-api-access-f89vh\") pod \"ingress-canary-5qtks\" (UID: \"e3640120-a52b-4ee5-aacb-83df135f0470\") " pod="openshift-ingress-canary/ingress-canary-5qtks" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.185562 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/75dcccce-425a-46ab-bfeb-dc5a0ee835d4-bound-sa-token\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.207563 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:45:58 crc kubenswrapper[4769]: E0122 13:45:58.207969 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:45:58.707952163 +0000 UTC m=+138.119062092 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.210513 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1fbc7f2a-fce4-4747-9a96-1fc4631a6197-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-5lfqv\" (UID: \"1fbc7f2a-fce4-4747-9a96-1fc4631a6197\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5lfqv" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.226287 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxm44\" (UniqueName: \"kubernetes.io/projected/2f88820f-4a65-4799-86f7-19be89871165-kube-api-access-cxm44\") pod \"openshift-controller-manager-operator-756b6f6bc6-2s8ds\" (UID: \"2f88820f-4a65-4799-86f7-19be89871165\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2s8ds" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.238195 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9z2dj" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.243613 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpcbg\" (UniqueName: \"kubernetes.io/projected/7d18d670-f698-4b8c-b6c3-300dc1ed8e46-kube-api-access-tpcbg\") pod \"olm-operator-6b444d44fb-6sgg2\" (UID: \"7d18d670-f698-4b8c-b6c3-300dc1ed8e46\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6sgg2" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.245404 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2s8ds" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.266274 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77vr8\" (UniqueName: \"kubernetes.io/projected/10a252bf-8be9-40ee-9632-4abbb989e43d-kube-api-access-77vr8\") pod \"packageserver-d55dfcdfc-98pt8\" (UID: \"10a252bf-8be9-40ee-9632-4abbb989e43d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-98pt8" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.284327 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/eed71162-446a-4681-a3a8-23247149532c-node-bootstrap-token\") pod \"machine-config-server-ggj4q\" (UID: \"eed71162-446a-4681-a3a8-23247149532c\") " pod="openshift-machine-config-operator/machine-config-server-ggj4q" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.284957 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tb8xv\" (UniqueName: \"kubernetes.io/projected/73369200-053d-4d9d-a775-c3cb76119697-kube-api-access-tb8xv\") pod \"machine-config-operator-74547568cd-m5n64\" (UID: \"73369200-053d-4d9d-a775-c3cb76119697\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m5n64" Jan 22 13:45:58 crc kubenswrapper[4769]: W0122 13:45:58.289446 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5c5cf556_ec03_4f29_94ed_13a58f54275c.slice/crio-9cf0f0e3fec9189b23a4b21c7da103edfe9deb79a563da0e3166056a7089771c WatchSource:0}: Error finding container 9cf0f0e3fec9189b23a4b21c7da103edfe9deb79a563da0e3166056a7089771c: Status 404 returned error can't find the container with id 9cf0f0e3fec9189b23a4b21c7da103edfe9deb79a563da0e3166056a7089771c Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.295614 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-2vm4g"] Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.299446 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-dltl2"] Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.303117 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6sgg2" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.312554 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.312569 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvd2r\" (UniqueName: \"kubernetes.io/projected/e9a409b5-e519-4c64-bc56-0b74757f2181-kube-api-access-dvd2r\") pod \"service-ca-operator-777779d784-tv6dp\" (UID: \"e9a409b5-e519-4c64-bc56-0b74757f2181\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tv6dp" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.312837 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m5n64" Jan 22 13:45:58 crc kubenswrapper[4769]: E0122 13:45:58.313967 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 13:45:58.813948609 +0000 UTC m=+138.225058558 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jhd8d" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:45:58 crc kubenswrapper[4769]: W0122 13:45:58.331026 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod52f284ae_bace_4bd8_8140_7f37fbad55d4.slice/crio-15f3c4ed22a595247be6516f8fc888ba081818c56bc0263b5edae7183a2a8c51 WatchSource:0}: Error finding container 15f3c4ed22a595247be6516f8fc888ba081818c56bc0263b5edae7183a2a8c51: Status 404 returned error can't find the container with id 15f3c4ed22a595247be6516f8fc888ba081818c56bc0263b5edae7183a2a8c51 Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.332109 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jswq\" (UniqueName: \"kubernetes.io/projected/153c6af8-5ac1-4256-ad20-992ad604c61b-kube-api-access-2jswq\") pod \"service-ca-9c57cc56f-gcpwt\" (UID: \"153c6af8-5ac1-4256-ad20-992ad604c61b\") " pod="openshift-service-ca/service-ca-9c57cc56f-gcpwt" Jan 22 13:45:58 crc kubenswrapper[4769]: W0122 13:45:58.338509 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5758b1f6_5135_428d_ad0b_6892a49d1800.slice/crio-20dcfd8c2fcd40a2700c438c94e18346739575631d07a620500af1bc89af4e2b WatchSource:0}: Error finding container 20dcfd8c2fcd40a2700c438c94e18346739575631d07a620500af1bc89af4e2b: Status 404 returned error can't find the container with id 20dcfd8c2fcd40a2700c438c94e18346739575631d07a620500af1bc89af4e2b Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 
13:45:58.342828 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fk5bd\" (UniqueName: \"kubernetes.io/projected/0335a481-e6c1-459c-8325-5da8dfcbcdb1-kube-api-access-fk5bd\") pod \"kube-storage-version-migrator-operator-b67b599dd-9nmqg\" (UID: \"0335a481-e6c1-459c-8325-5da8dfcbcdb1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9nmqg" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.363488 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8874\" (UniqueName: \"kubernetes.io/projected/3ef7a187-ce98-488c-a9b0-e16449e2882f-kube-api-access-n8874\") pod \"collect-profiles-29484825-hgsdh\" (UID: \"3ef7a187-ce98-488c-a9b0-e16449e2882f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484825-hgsdh" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.364769 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bxgr9" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.383966 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nl6c\" (UniqueName: \"kubernetes.io/projected/e01e843d-f221-43ed-a309-e21fe298f64f-kube-api-access-8nl6c\") pod \"migrator-59844c95c7-d8wjb\" (UID: \"e01e843d-f221-43ed-a309-e21fe298f64f\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-d8wjb" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.385744 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5lfqv" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.397644 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rcksw" Jan 22 13:45:58 crc kubenswrapper[4769]: E0122 13:45:58.414271 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:45:58.91424785 +0000 UTC m=+138.325357779 (durationBeforeRetry 500ms). 
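The recurring failure above, "driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers", is a node-local condition: the kubelet resolves CSI mount and unmount operations against the drivers that have registered over its plugin socket, so these retries keep failing until the hostpath provisioner's node plugin registers itself. A minimal diagnostic sketch, assuming a reachable kubeconfig (the path is an assumption, and the node name "crc" is read off the log's hostname), that cross-checks the cluster-scoped CSIDriver objects against the drivers recorded for this node:

```go
// List CSIDriver objects (what the control plane knows) and the drivers the
// node's kubelet has registered (what the CSINode object records), to
// cross-check "not found in the list of registered CSI drivers" errors.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: kubeconfig path supplied by the operator.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Cluster-scoped CSIDriver objects.
	drivers, err := cs.StorageV1().CSIDrivers().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, d := range drivers.Items {
		fmt.Println("CSIDriver:", d.Name)
	}
	// Per-node registrations; "crc" is the node name from this log.
	node, err := cs.StorageV1().CSINodes().Get(context.TODO(), "crc", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, d := range node.Spec.Drivers {
		fmt.Println("registered on node:", d.Name)
	}
}
```

If kubevirt.io.hostpath-provisioner shows up as a CSIDriver but not under the node's CSINode entry, the errors above are the expected symptom until the driver's node plugin comes up; the hostpath-provisioner/csi-hostpathplugin-xdxvs pod being sandboxed and started later in this log is consistent with that.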
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.414300 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.414854 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:45:58 crc kubenswrapper[4769]: E0122 13:45:58.415237 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 13:45:58.915222617 +0000 UTC m=+138.326332546 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jhd8d" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.416003 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-98pt8" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.423093 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484825-hgsdh" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.429409 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5d78\" (UniqueName: \"kubernetes.io/projected/6e9c7f00-95b3-4453-8d82-df8b88a2bc8a-kube-api-access-b5d78\") pod \"csi-hostpathplugin-xdxvs\" (UID: \"6e9c7f00-95b3-4453-8d82-df8b88a2bc8a\") " pod="hostpath-provisioner/csi-hostpathplugin-xdxvs" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.429624 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-28gzs"] Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.431385 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-gcpwt" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.441432 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-5qtks" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.444088 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jr9vm"] Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.445366 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-tv6dp" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.459499 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkz49\" (UniqueName: \"kubernetes.io/projected/eed71162-446a-4681-a3a8-23247149532c-kube-api-access-xkz49\") pod \"machine-config-server-ggj4q\" (UID: \"eed71162-446a-4681-a3a8-23247149532c\") " pod="openshift-machine-config-operator/machine-config-server-ggj4q" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.485916 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkj6s\" (UniqueName: \"kubernetes.io/projected/bf805bae-0da1-4a8b-a8c8-6c99cf8ce515-kube-api-access-pkj6s\") pod \"dns-default-rkk84\" (UID: \"bf805bae-0da1-4a8b-a8c8-6c99cf8ce515\") " pod="openshift-dns/dns-default-rkk84" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.487273 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-xdxvs" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.492893 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-rkk84" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.494117 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-9mm5p"] Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.504314 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-q8sxk"] Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.516453 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:45:58 crc kubenswrapper[4769]: E0122 13:45:58.516930 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:45:59.016895624 +0000 UTC m=+138.428005553 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.594941 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9nmqg" Jan 22 13:45:58 crc kubenswrapper[4769]: W0122 13:45:58.597335 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81769776_c586_45a0_a9ed_42ce4789bb28.slice/crio-cec41d2114562c6e7eff84ec57f631899a095a3e5796fdb0ee62aacfdeaf374c WatchSource:0}: Error finding container cec41d2114562c6e7eff84ec57f631899a095a3e5796fdb0ee62aacfdeaf374c: Status 404 returned error can't find the container with id cec41d2114562c6e7eff84ec57f631899a095a3e5796fdb0ee62aacfdeaf374c Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.599768 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-ds5qk"] Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.617901 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:45:58 crc kubenswrapper[4769]: E0122 13:45:58.618241 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 13:45:59.118224283 +0000 UTC m=+138.529334212 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jhd8d" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.624755 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-2vm4g" event={"ID":"5758b1f6-5135-428d-ad0b-6892a49d1800","Type":"ContainerStarted","Data":"20dcfd8c2fcd40a2700c438c94e18346739575631d07a620500af1bc89af4e2b"} Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.625442 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-28gzs" event={"ID":"db199c04-6231-46b3-a4e7-5cd74604b005","Type":"ContainerStarted","Data":"3dac4c0e616238b9276a10434a08b75a4f898abecefb5d865798f5f9f871c1f7"} Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.626208 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-dltl2" event={"ID":"52f284ae-bace-4bd8-8140-7f37fbad55d4","Type":"ContainerStarted","Data":"15f3c4ed22a595247be6516f8fc888ba081818c56bc0263b5edae7183a2a8c51"} Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.629710 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6sgg2"] Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.630852 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-mgft7" 
event={"ID":"92eb7fb7-d1b8-45ad-b8ff-8411d04eb048","Type":"ContainerStarted","Data":"97a1a62427a3ec2a73662c2575862cfebc5a1a3859d4927655a3699ac711d789"} Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.631011 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-mgft7" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.633189 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2s5j2" event={"ID":"8c1e55ad-d8f0-4ceb-b929-e4f09903df58","Type":"ContainerStarted","Data":"3c6a93df69f7d8a756e66110d110588adfe6fa5f2e4b4ad92ae0ff8ad10e7d7e"} Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.634605 4769 patch_prober.go:28] interesting pod/downloads-7954f5f757-mgft7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.634639 4769 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mgft7" podUID="92eb7fb7-d1b8-45ad-b8ff-8411d04eb048" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.639807 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-k5psf" event={"ID":"2b0fa7ff-24c4-431c-bc35-87f9483d5c70","Type":"ContainerStarted","Data":"99824953bd8e0a8c9f25b06e40921ab235122e7afd37d061ee57a611b654dd94"} Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.644072 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-d8wjb" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.644473 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-pb7qw" event={"ID":"5c5cf556-ec03-4f29-94ed-13a58f54275c","Type":"ContainerStarted","Data":"9cf0f0e3fec9189b23a4b21c7da103edfe9deb79a563da0e3166056a7089771c"} Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.649072 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xmh8s" event={"ID":"ce7607b6-0e74-47ba-8875-057821862224","Type":"ContainerStarted","Data":"5da20624d5c68fcb1a8c77977639b7cf7fea8fff2cff28af01f29f1b37b182e7"} Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.651268 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dbzkw" event={"ID":"c1a96247-d002-4f96-9695-16a4011f3ad5","Type":"ContainerStarted","Data":"0c11eb654fab27ccff28103bc5868b950df871a3ff861f98804806ad409a7f1b"} Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.651294 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dbzkw" event={"ID":"c1a96247-d002-4f96-9695-16a4011f3ad5","Type":"ContainerStarted","Data":"72b0fccf83855e247e0a6c9983b7b2a8640e6a27df7042e164f8fe56bfcb6df9"} Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.653564 4769 generic.go:334] "Generic (PLEG): container finished" podID="40076fe2-006c-4dc7-ac7c-71fa27c9bb7d" containerID="459a9f471127a040b63915fd86a2c1727c19775edc4779622bf444df59d12b72" exitCode=0 Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.653606 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-v24vn" event={"ID":"40076fe2-006c-4dc7-ac7c-71fa27c9bb7d","Type":"ContainerDied","Data":"459a9f471127a040b63915fd86a2c1727c19775edc4779622bf444df59d12b72"} Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.654695 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-q8sxk" event={"ID":"81769776-c586-45a0-a9ed-42ce4789bb28","Type":"ContainerStarted","Data":"cec41d2114562c6e7eff84ec57f631899a095a3e5796fdb0ee62aacfdeaf374c"} Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.655461 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-65brj" event={"ID":"f4e58a9e-ecc8-43de-9518-0b014b2a27d2","Type":"ContainerStarted","Data":"9557a5ff3f6a65fcc1117417184e0b6084b41c770702d1372de880df0dade92d"} Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.656228 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s9v5x" event={"ID":"a6d7f1cf-d68c-4658-98b2-e18d8e70edb8","Type":"ContainerStarted","Data":"205c5845b5f3bc2b5a7a4133454743ada342c3b43673454d4739b7eb2ee66954"} Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.656917 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-nwrtw" event={"ID":"9fa4c168-21ea-4f79-a600-7f3c8f656bd0","Type":"ContainerStarted","Data":"261bd1091a2577bc464771e7c33703e0f325865e92a22082bfb502ff9ac9d6f2"} Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.657505 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-apiserver/apiserver-76f77b778f-jjt2k" event={"ID":"15723c66-27d3-4cea-9962-e75bbe7bb967","Type":"ContainerStarted","Data":"b2e183a2748638f6147b4875fa0815521584060feb7b408b9c34ad657edc5a60"} Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.659867 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5985" event={"ID":"81a5be64-af9a-4376-9105-c36371ad5069","Type":"ContainerStarted","Data":"f89c3a362197841c752ab5f3edfa4041e9b516f25543b045b655daf2d5510368"} Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.660781 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-9mm5p" event={"ID":"ba0299e2-1902-461d-bf42-f3d5dfe205ff","Type":"ContainerStarted","Data":"78f838c57c348d24f96e78f988e702c61f7ee98211b60bd96d672316dfde3ae1"} Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.661555 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jr9vm" event={"ID":"e7c7c3d4-58d6-4bd2-a85c-7b933bb20d43","Type":"ContainerStarted","Data":"eff4648ecb9b16184b1c776dfeea1941a88a960abd565b6c43c161ec06e71187"} Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.662394 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qp45" event={"ID":"88755d81-da75-40b3-97c4-224eaad0eca2","Type":"ContainerStarted","Data":"8a4ca8e6f7f24168e7b28e169244f2171fb54980af290f9158d1ed973b3b78f4"} Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.663705 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" event={"ID":"e14c6636-281b-40e1-9ee8-1a08812104fd","Type":"ContainerStarted","Data":"ecd96351628bb1d50b55482cf0c3518a0cdf7cafe69577c7b0d90695bd293ec5"} Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.718903 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:45:58 crc kubenswrapper[4769]: E0122 13:45:58.719041 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:45:59.219024076 +0000 UTC m=+138.630134005 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.719656 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:45:58 crc kubenswrapper[4769]: E0122 13:45:58.720834 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 13:45:59.220811285 +0000 UTC m=+138.631921214 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jhd8d" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.751854 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-ggj4q" Jan 22 13:45:58 crc kubenswrapper[4769]: W0122 13:45:58.776235 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d18d670_f698_4b8c_b6c3_300dc1ed8e46.slice/crio-26d893b43e058c6203160cba9c74767ed4aa3dddc64d8e87a697a55d51779bb7 WatchSource:0}: Error finding container 26d893b43e058c6203160cba9c74767ed4aa3dddc64d8e87a697a55d51779bb7: Status 404 returned error can't find the container with id 26d893b43e058c6203160cba9c74767ed4aa3dddc64d8e87a697a55d51779bb7 Jan 22 13:45:58 crc kubenswrapper[4769]: W0122 13:45:58.776739 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd8b75cc3_465e_4542_82ee_4950744e89a0.slice/crio-53d991a579fab4e79b7d24b5b9174ffcd82d7306f3e8601d4468013ccaecb4fe WatchSource:0}: Error finding container 53d991a579fab4e79b7d24b5b9174ffcd82d7306f3e8601d4468013ccaecb4fe: Status 404 returned error can't find the container with id 53d991a579fab4e79b7d24b5b9174ffcd82d7306f3e8601d4468013ccaecb4fe Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.792308 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2s8ds"] Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.820650 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:45:58 crc kubenswrapper[4769]: E0122 13:45:58.820999 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:45:59.320980641 +0000 UTC m=+138.732090570 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.921973 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:45:58 crc kubenswrapper[4769]: E0122 13:45:58.922759 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 13:45:59.422743401 +0000 UTC m=+138.833853330 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jhd8d" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.924933 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pzj8w"] Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.924968 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-9z2dj"] Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.924980 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5jwbt"] Jan 22 13:45:59 crc kubenswrapper[4769]: I0122 13:45:59.026355 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:45:59 crc kubenswrapper[4769]: E0122 13:45:59.026844 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:45:59.526827045 +0000 UTC m=+138.937936974 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:45:59 crc kubenswrapper[4769]: I0122 13:45:59.031278 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-m5n64"] Jan 22 13:45:59 crc kubenswrapper[4769]: I0122 13:45:59.133996 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:45:59 crc kubenswrapper[4769]: E0122 13:45:59.134886 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 13:45:59.634873599 +0000 UTC m=+139.045983528 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jhd8d" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:45:59 crc kubenswrapper[4769]: I0122 13:45:59.235431 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:45:59 crc kubenswrapper[4769]: E0122 13:45:59.235827 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:45:59.735783976 +0000 UTC m=+139.146893905 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:45:59 crc kubenswrapper[4769]: I0122 13:45:59.328160 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bxgr9"] Jan 22 13:45:59 crc kubenswrapper[4769]: I0122 13:45:59.345987 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:45:59 crc kubenswrapper[4769]: E0122 13:45:59.346314 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 13:45:59.846300057 +0000 UTC m=+139.257409986 (durationBeforeRetry 500ms). 
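Each mount and unmount failure here is re-queued with "No retries permitted until ... (durationBeforeRetry 500ms)": the volume manager gates retries behind a delay instead of spinning on the broken operation. A minimal sketch of that gating pattern, assuming the 500ms initial delay from the log and a doubling-with-cap policy; the kubelet's exponential backoff behaves along these lines, but this is a sketch of the pattern, not its exact code:

```go
// Retry gating in the style of "No retries permitted until <t> (durationBeforeRetry 500ms)".
package main

import (
	"fmt"
	"time"
)

type backoff struct {
	delay   time.Duration // current wait before the next retry
	cap     time.Duration // upper bound on the wait
	lastErr time.Time     // when the operation last failed
}

// fail records a failure and grows the delay, starting at the 500ms seen in the log.
func (b *backoff) fail(now time.Time) {
	b.lastErr = now
	switch {
	case b.delay == 0:
		b.delay = 500 * time.Millisecond
	case b.delay < b.cap:
		b.delay *= 2
		if b.delay > b.cap {
			b.delay = b.cap
		}
	}
}

// mayRetry reports whether the gate has reopened.
func (b *backoff) mayRetry(now time.Time) bool {
	return now.Sub(b.lastErr) >= b.delay
}

func main() {
	b := &backoff{cap: 2 * time.Minute}
	now := time.Now()
	b.fail(now)
	fmt.Println("no retries permitted until", now.Add(b.delay))
	fmt.Println("may retry immediately?", b.mayRetry(now))
}
```

In this log the printed delay stays at 500ms across attempts; whether and when the kubelet grows the delay depends on how the pending operation is keyed, which this sketch does not try to reproduce.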
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jhd8d" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:45:59 crc kubenswrapper[4769]: I0122 13:45:59.422826 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-gcpwt"] Jan 22 13:45:59 crc kubenswrapper[4769]: I0122 13:45:59.447892 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:45:59 crc kubenswrapper[4769]: E0122 13:45:59.448094 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:45:59.948069407 +0000 UTC m=+139.359179336 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:45:59 crc kubenswrapper[4769]: I0122 13:45:59.448186 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:45:59 crc kubenswrapper[4769]: E0122 13:45:59.448501 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 13:45:59.948484999 +0000 UTC m=+139.359594928 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jhd8d" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:45:59 crc kubenswrapper[4769]: I0122 13:45:59.549130 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:45:59 crc kubenswrapper[4769]: E0122 13:45:59.550467 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:46:00.050449064 +0000 UTC m=+139.461558993 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:45:59 crc kubenswrapper[4769]: I0122 13:45:59.561647 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:45:59 crc kubenswrapper[4769]: E0122 13:45:59.562457 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 13:46:00.062440154 +0000 UTC m=+139.473550083 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jhd8d" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:45:59 crc kubenswrapper[4769]: I0122 13:45:59.665934 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:45:59 crc kubenswrapper[4769]: E0122 13:45:59.666268 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:46:00.166254461 +0000 UTC m=+139.577364380 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:45:59 crc kubenswrapper[4769]: I0122 13:45:59.749173 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2s5j2" event={"ID":"8c1e55ad-d8f0-4ceb-b929-e4f09903df58","Type":"ContainerStarted","Data":"509f5511eb5e1404c2cd76e0c51c68ffc6dabc6c95a6aa3ff66e728a8b25495c"} Jan 22 13:45:59 crc kubenswrapper[4769]: I0122 13:45:59.755245 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m5n64" event={"ID":"73369200-053d-4d9d-a775-c3cb76119697","Type":"ContainerStarted","Data":"bec64279395a6d602d01ad63df5c1e5e8eced06e90e4c03ac8f551be10f43226"} Jan 22 13:45:59 crc kubenswrapper[4769]: I0122 13:45:59.766905 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:45:59 crc kubenswrapper[4769]: E0122 13:45:59.767346 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 13:46:00.267333162 +0000 UTC m=+139.678443091 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jhd8d" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:45:59 crc kubenswrapper[4769]: I0122 13:45:59.771638 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s9v5x" event={"ID":"a6d7f1cf-d68c-4658-98b2-e18d8e70edb8","Type":"ContainerStarted","Data":"453cb9ed2a92ecaf90bedbb493b80ff3312c834885f3d4755fc067c4850f3079"} Jan 22 13:45:59 crc kubenswrapper[4769]: I0122 13:45:59.771752 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dbzkw" podStartSLOduration=120.771738323 podStartE2EDuration="2m0.771738323s" podCreationTimestamp="2026-01-22 13:43:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:45:59.766276014 +0000 UTC m=+139.177385963" watchObservedRunningTime="2026-01-22 13:45:59.771738323 +0000 UTC m=+139.182848252" Jan 22 13:45:59 crc kubenswrapper[4769]: I0122 13:45:59.774342 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-ggj4q" event={"ID":"eed71162-446a-4681-a3a8-23247149532c","Type":"ContainerStarted","Data":"d938ce0bb72b2efdb480ab7e0796f80b8ac474cf537d2f8f3ef5b60cbdb8cb24"} Jan 22 13:45:59 crc kubenswrapper[4769]: I0122 13:45:59.777016 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-bkbvd" event={"ID":"43448f45-644f-4b5a-aa06-567b5c8f8279","Type":"ContainerStarted","Data":"5eeedc28e52cdba16a36873cad58d79a10aa01c7ed135179a7685905d0788436"} Jan 22 13:45:59 crc kubenswrapper[4769]: I0122 13:45:59.815194 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bxgr9" event={"ID":"9ddda125-6c9a-4546-901a-a32dd6e99251","Type":"ContainerStarted","Data":"5ab0d3cd3aa8ae56bdc7febada89aec58f1ae7ffcadd3dac76a290873e9339bc"} Jan 22 13:45:59 crc kubenswrapper[4769]: I0122 13:45:59.868364 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-2vm4g" event={"ID":"5758b1f6-5135-428d-ad0b-6892a49d1800","Type":"ContainerStarted","Data":"61eff18189b6c9a1bd08ccc0a7ab9b189d05340bdea3984317c2adc4a1aa747e"} Jan 22 13:45:59 crc kubenswrapper[4769]: I0122 13:45:59.869739 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-2vm4g" Jan 22 13:45:59 crc kubenswrapper[4769]: I0122 13:45:59.870389 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:45:59 crc kubenswrapper[4769]: E0122 13:45:59.872166 4769 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:46:00.372127366 +0000 UTC m=+139.783237295 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:45:59 crc kubenswrapper[4769]: I0122 13:45:59.878076 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:45:59 crc kubenswrapper[4769]: E0122 13:45:59.887057 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 13:46:00.387034546 +0000 UTC m=+139.798144475 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jhd8d" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:45:59 crc kubenswrapper[4769]: I0122 13:45:59.887885 4769 patch_prober.go:28] interesting pod/console-operator-58897d9998-2vm4g container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Jan 22 13:45:59 crc kubenswrapper[4769]: I0122 13:45:59.887976 4769 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-2vm4g" podUID="5758b1f6-5135-428d-ad0b-6892a49d1800" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" Jan 22 13:45:59 crc kubenswrapper[4769]: I0122 13:45:59.996427 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:46:00 crc kubenswrapper[4769]: E0122 13:45:59.997821 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:46:00.497782744 +0000 UTC m=+139.908892673 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:45:59.999673 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-mgft7" podStartSLOduration=119.999653905 podStartE2EDuration="1m59.999653905s" podCreationTimestamp="2026-01-22 13:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:45:59.945750352 +0000 UTC m=+139.356860291" watchObservedRunningTime="2026-01-22 13:45:59.999653905 +0000 UTC m=+139.410763834" Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.009466 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-2vm4g" podStartSLOduration=120.009444525 podStartE2EDuration="2m0.009444525s" podCreationTimestamp="2026-01-22 13:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:45:59.99910647 +0000 UTC m=+139.410216409" watchObservedRunningTime="2026-01-22 13:46:00.009444525 +0000 UTC m=+139.420554454" Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.017543 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-5qtks"] Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.027327 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5jwbt" event={"ID":"dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae","Type":"ContainerStarted","Data":"c437a788f729ec1c74235c0c86ed4e15424a790ae709346c3620566dfd2a5bb2"} Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.030455 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-xdxvs"] Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.044257 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-bkbvd" podStartSLOduration=120.044239812 podStartE2EDuration="2m0.044239812s" podCreationTimestamp="2026-01-22 13:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:46:00.042611747 +0000 UTC m=+139.453721686" watchObservedRunningTime="2026-01-22 13:46:00.044239812 +0000 UTC m=+139.455349741" Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.063593 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" event={"ID":"e14c6636-281b-40e1-9ee8-1a08812104fd","Type":"ContainerStarted","Data":"6c1793a53b8ea260d1542d071a7c88803a7a6d2b79a3a6f7fb53e4533578a8ea"} Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.064637 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.097433 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:46:00 crc kubenswrapper[4769]: E0122 13:46:00.097827 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 13:46:00.597815186 +0000 UTC m=+140.008925115 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jhd8d" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:00 crc kubenswrapper[4769]: W0122 13:46:00.146974 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode3640120_a52b_4ee5_aacb_83df135f0470.slice/crio-aa78fcb70486bc019c691076a7adb1dcd9245d05aeda3480b5b7ef4fdce04449 WatchSource:0}: Error finding container aa78fcb70486bc019c691076a7adb1dcd9245d05aeda3480b5b7ef4fdce04449: Status 404 returned error can't find the container with id aa78fcb70486bc019c691076a7adb1dcd9245d05aeda3480b5b7ef4fdce04449 Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.147461 4769 generic.go:334] "Generic (PLEG): container finished" podID="81a5be64-af9a-4376-9105-c36371ad5069" containerID="4afe25e720ceb6da4ecc630fdedcd4ab4b8cac879f3f07359c5cf335ae32aa65" exitCode=0 Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.147566 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5985" event={"ID":"81a5be64-af9a-4376-9105-c36371ad5069","Type":"ContainerDied","Data":"4afe25e720ceb6da4ecc630fdedcd4ab4b8cac879f3f07359c5cf335ae32aa65"} Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.198832 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:46:00 crc kubenswrapper[4769]: E0122 13:46:00.200044 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:46:00.700028169 +0000 UTC m=+140.111138098 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.222249 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" podStartSLOduration=121.22222894 podStartE2EDuration="2m1.22222894s" podCreationTimestamp="2026-01-22 13:43:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:46:00.147665668 +0000 UTC m=+139.558775597" watchObservedRunningTime="2026-01-22 13:46:00.22222894 +0000 UTC m=+139.633338869" Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.239777 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-65brj" event={"ID":"f4e58a9e-ecc8-43de-9518-0b014b2a27d2","Type":"ContainerStarted","Data":"88d2dabc1f7f8d4e6bab567d6454ab8cf35439d88628883475f54f7bea23bfa6"} Jan 22 13:46:00 crc kubenswrapper[4769]: W0122 13:46:00.243902 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6e9c7f00_95b3_4453_8d82_df8b88a2bc8a.slice/crio-fe8991de8a579d3543f2e45b57db67c0f93dba16a0a123b244efbb6f989087ea WatchSource:0}: Error finding container fe8991de8a579d3543f2e45b57db67c0f93dba16a0a123b244efbb6f989087ea: Status 404 returned error can't find the container with id fe8991de8a579d3543f2e45b57db67c0f93dba16a0a123b244efbb6f989087ea Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.250070 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6sgg2" event={"ID":"7d18d670-f698-4b8c-b6c3-300dc1ed8e46","Type":"ContainerStarted","Data":"26d893b43e058c6203160cba9c74767ed4aa3dddc64d8e87a697a55d51779bb7"} Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.316073 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-k5psf" event={"ID":"2b0fa7ff-24c4-431c-bc35-87f9483d5c70","Type":"ContainerStarted","Data":"ee7c2bbb114ddbe83948948a75500f8669adfebad9df9dbd0ee86c53a656337b"} Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.317440 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-k5psf" Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.321949 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:46:00 crc kubenswrapper[4769]: E0122 13:46:00.322274 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-22 13:46:00.822261853 +0000 UTC m=+140.233371782 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jhd8d" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.339315 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-k5psf" Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.348859 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484825-hgsdh"] Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.395398 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-pb7qw" event={"ID":"5c5cf556-ec03-4f29-94ed-13a58f54275c","Type":"ContainerStarted","Data":"83747314671fa6f7c1a40e183a9a83e1df752bb3f15a71c3441472c55ff2deb5"} Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.423982 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-nwrtw" event={"ID":"9fa4c168-21ea-4f79-a600-7f3c8f656bd0","Type":"ContainerStarted","Data":"b84cebc5b675e12661d4f7b983dcf05ea20ef3d051e2af2e9f65b08adbb73089"} Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.424814 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-k5psf" podStartSLOduration=120.424783964 podStartE2EDuration="2m0.424783964s" podCreationTimestamp="2026-01-22 13:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:46:00.355386224 +0000 UTC m=+139.766496153" watchObservedRunningTime="2026-01-22 13:46:00.424783964 +0000 UTC m=+139.835893893" Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.425010 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:46:00 crc kubenswrapper[4769]: E0122 13:46:00.426598 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:46:00.926574423 +0000 UTC m=+140.337684352 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.452404 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-98pt8"] Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.460848 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5lfqv"] Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.473623 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-rcksw"] Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.481985 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2s8ds" event={"ID":"2f88820f-4a65-4799-86f7-19be89871165","Type":"ContainerStarted","Data":"28a643e809f090fe88ab01fc428a29d61e7801c79ddd459639f8aa0d1379afd2"} Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.492483 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-nwrtw" podStartSLOduration=120.492466016 podStartE2EDuration="2m0.492466016s" podCreationTimestamp="2026-01-22 13:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:46:00.481739491 +0000 UTC m=+139.892849420" watchObservedRunningTime="2026-01-22 13:46:00.492466016 +0000 UTC m=+139.903575945" Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.514785 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-pb7qw" podStartSLOduration=120.514765799 podStartE2EDuration="2m0.514765799s" podCreationTimestamp="2026-01-22 13:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:46:00.513447363 +0000 UTC m=+139.924557292" watchObservedRunningTime="2026-01-22 13:46:00.514765799 +0000 UTC m=+139.925875728" Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.532857 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:46:00 crc kubenswrapper[4769]: E0122 13:46:00.535060 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 13:46:01.035042278 +0000 UTC m=+140.446152217 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jhd8d" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.561623 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-rkk84"] Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.613559 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qp45" event={"ID":"88755d81-da75-40b3-97c4-224eaad0eca2","Type":"ContainerStarted","Data":"2f10c10086311c3110b8a32a37138f280d5ba030f8b232e9aab33f5fe28c6210"} Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.614561 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qp45" Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.625623 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qp45" Jan 22 13:46:00 crc kubenswrapper[4769]: W0122 13:46:00.625711 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1fbc7f2a_fce4_4747_9a96_1fc4631a6197.slice/crio-feb7629d8f114fe6483974b25cf3b1820b5dd34a46c66b845e4d676f51cc766b WatchSource:0}: Error finding container feb7629d8f114fe6483974b25cf3b1820b5dd34a46c66b845e4d676f51cc766b: Status 404 returned error can't find the container with id feb7629d8f114fe6483974b25cf3b1820b5dd34a46c66b845e4d676f51cc766b Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.641477 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-d8wjb"] Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.646044 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:46:00 crc kubenswrapper[4769]: E0122 13:46:00.646595 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:46:01.146578606 +0000 UTC m=+140.557688535 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.661391 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pzj8w" event={"ID":"db7a69ec-2a82-4f9b-b83a-42237a02087e","Type":"ContainerStarted","Data":"41bdbc90f71424027e07662dd5bcb107d909154091d0ce9d7b455121ca3b97d2"} Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.688034 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-ds5qk" event={"ID":"d8b75cc3-465e-4542-82ee-4950744e89a0","Type":"ContainerStarted","Data":"53d991a579fab4e79b7d24b5b9174ffcd82d7306f3e8601d4468013ccaecb4fe"} Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.696901 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qp45" podStartSLOduration=120.69687863 podStartE2EDuration="2m0.69687863s" podCreationTimestamp="2026-01-22 13:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:46:00.684428118 +0000 UTC m=+140.095538057" watchObservedRunningTime="2026-01-22 13:46:00.69687863 +0000 UTC m=+140.107988559" Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.735482 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xmh8s" event={"ID":"ce7607b6-0e74-47ba-8875-057821862224","Type":"ContainerStarted","Data":"eea67f3441e94075454fb0c7d3a96d5408a510a886b13c7d5270615e57e2b2ea"} Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.751536 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:46:00 crc kubenswrapper[4769]: E0122 13:46:00.753191 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 13:46:01.253175539 +0000 UTC m=+140.664285468 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jhd8d" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.765052 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pzj8w" podStartSLOduration=120.765033976 podStartE2EDuration="2m0.765033976s" podCreationTimestamp="2026-01-22 13:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:46:00.764686757 +0000 UTC m=+140.175796686" watchObservedRunningTime="2026-01-22 13:46:00.765033976 +0000 UTC m=+140.176143905" Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.775321 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9nmqg"] Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.784832 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9z2dj" event={"ID":"a9e87e73-cad4-48f0-81f9-d636cd123278","Type":"ContainerStarted","Data":"c411f7d050d1c510d6717211a8877f15f6ed19c31db25f69102617eb577b294f"} Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.824578 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-tv6dp"] Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.829492 4769 generic.go:334] "Generic (PLEG): container finished" podID="15723c66-27d3-4cea-9962-e75bbe7bb967" containerID="15f6c90aff91cd7860e436fa3cbf2c39646fed4974a607821e1a18f1fb00afb3" exitCode=0 Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.829614 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-jjt2k" event={"ID":"15723c66-27d3-4cea-9962-e75bbe7bb967","Type":"ContainerDied","Data":"15f6c90aff91cd7860e436fa3cbf2c39646fed4974a607821e1a18f1fb00afb3"} Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.844979 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.852771 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:46:00 crc kubenswrapper[4769]: E0122 13:46:00.854102 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:46:01.354076697 +0000 UTC m=+140.765186636 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.913329 4769 patch_prober.go:28] interesting pod/downloads-7954f5f757-mgft7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.913702 4769 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mgft7" podUID="92eb7fb7-d1b8-45ad-b8ff-8411d04eb048" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.914911 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xmh8s" podStartSLOduration=120.91489613 podStartE2EDuration="2m0.91489613s" podCreationTimestamp="2026-01-22 13:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:46:00.878413656 +0000 UTC m=+140.289523585" watchObservedRunningTime="2026-01-22 13:46:00.91489613 +0000 UTC m=+140.326006059" Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.922947 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-dltl2" event={"ID":"52f284ae-bace-4bd8-8140-7f37fbad55d4","Type":"ContainerStarted","Data":"e8ead6bae50748969fc2453d09ac55f1d0078c3154caa7084217fea93504125c"} Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.956738 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:46:00 crc kubenswrapper[4769]: E0122 13:46:00.958526 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 13:46:01.45851434 +0000 UTC m=+140.869624269 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jhd8d" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.987230 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-dltl2" podStartSLOduration=121.9872135 podStartE2EDuration="2m1.9872135s" podCreationTimestamp="2026-01-22 13:43:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:46:00.985121042 +0000 UTC m=+140.396230961" watchObservedRunningTime="2026-01-22 13:46:00.9872135 +0000 UTC m=+140.398323419" Jan 22 13:46:01 crc kubenswrapper[4769]: I0122 13:46:01.001634 4769 patch_prober.go:28] interesting pod/router-default-5444994796-pb7qw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 13:46:01 crc kubenswrapper[4769]: [-]has-synced failed: reason withheld Jan 22 13:46:01 crc kubenswrapper[4769]: [+]process-running ok Jan 22 13:46:01 crc kubenswrapper[4769]: healthz check failed Jan 22 13:46:01 crc kubenswrapper[4769]: I0122 13:46:01.001682 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pb7qw" podUID="5c5cf556-ec03-4f29-94ed-13a58f54275c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 13:46:01 crc kubenswrapper[4769]: I0122 13:46:01.006928 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-pb7qw" Jan 22 13:46:01 crc kubenswrapper[4769]: W0122 13:46:01.034306 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9a409b5_e519_4c64_bc56_0b74757f2181.slice/crio-71d39971d170735b8c8c23ba28563a7279d991261ca0796ba01c898d1d545fa8 WatchSource:0}: Error finding container 71d39971d170735b8c8c23ba28563a7279d991261ca0796ba01c898d1d545fa8: Status 404 returned error can't find the container with id 71d39971d170735b8c8c23ba28563a7279d991261ca0796ba01c898d1d545fa8 Jan 22 13:46:01 crc kubenswrapper[4769]: I0122 13:46:01.058070 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:46:01 crc kubenswrapper[4769]: E0122 13:46:01.059845 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:46:01.559772116 +0000 UTC m=+140.970882045 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:01 crc kubenswrapper[4769]: I0122 13:46:01.167615 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:46:01 crc kubenswrapper[4769]: E0122 13:46:01.168096 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 13:46:01.668076867 +0000 UTC m=+141.079186796 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jhd8d" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:01 crc kubenswrapper[4769]: I0122 13:46:01.271141 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:46:01 crc kubenswrapper[4769]: E0122 13:46:01.272449 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:46:01.772433008 +0000 UTC m=+141.183542927 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:01 crc kubenswrapper[4769]: I0122 13:46:01.375864 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:46:01 crc kubenswrapper[4769]: E0122 13:46:01.376718 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 13:46:01.876704598 +0000 UTC m=+141.287814527 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jhd8d" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:01 crc kubenswrapper[4769]: I0122 13:46:01.480324 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:46:01 crc kubenswrapper[4769]: E0122 13:46:01.480691 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:46:01.980677069 +0000 UTC m=+141.391786998 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:01 crc kubenswrapper[4769]: I0122 13:46:01.582743 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:46:01 crc kubenswrapper[4769]: E0122 13:46:01.583539 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 13:46:02.083522338 +0000 UTC m=+141.494632267 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jhd8d" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:01 crc kubenswrapper[4769]: I0122 13:46:01.683746 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:46:01 crc kubenswrapper[4769]: E0122 13:46:01.684858 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:46:02.184825246 +0000 UTC m=+141.595935175 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:01 crc kubenswrapper[4769]: I0122 13:46:01.787067 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:46:01 crc kubenswrapper[4769]: E0122 13:46:01.787534 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 13:46:02.287522342 +0000 UTC m=+141.698632271 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jhd8d" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:01 crc kubenswrapper[4769]: I0122 13:46:01.887953 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:46:01 crc kubenswrapper[4769]: E0122 13:46:01.888209 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:46:02.388195482 +0000 UTC m=+141.799305411 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:01 crc kubenswrapper[4769]: I0122 13:46:01.984495 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-5qtks" event={"ID":"e3640120-a52b-4ee5-aacb-83df135f0470","Type":"ContainerStarted","Data":"6a83b639b8e7281d2f38a8a13bd2d8cd0b3009fbfbe3619a1b641f3078427312"} Jan 22 13:46:01 crc kubenswrapper[4769]: I0122 13:46:01.984547 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-5qtks" event={"ID":"e3640120-a52b-4ee5-aacb-83df135f0470","Type":"ContainerStarted","Data":"aa78fcb70486bc019c691076a7adb1dcd9245d05aeda3480b5b7ef4fdce04449"} Jan 22 13:46:01 crc kubenswrapper[4769]: I0122 13:46:01.988663 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:46:01 crc kubenswrapper[4769]: E0122 13:46:01.989030 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 13:46:02.489016637 +0000 UTC m=+141.900126566 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jhd8d" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:01 crc kubenswrapper[4769]: I0122 13:46:01.999282 4769 patch_prober.go:28] interesting pod/router-default-5444994796-pb7qw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 13:46:01 crc kubenswrapper[4769]: [-]has-synced failed: reason withheld Jan 22 13:46:01 crc kubenswrapper[4769]: [+]process-running ok Jan 22 13:46:01 crc kubenswrapper[4769]: healthz check failed Jan 22 13:46:01 crc kubenswrapper[4769]: I0122 13:46:01.999330 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pb7qw" podUID="5c5cf556-ec03-4f29-94ed-13a58f54275c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.016830 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rcksw" event={"ID":"3f91eb97-e4cc-4a67-9426-7aec499b4485","Type":"ContainerStarted","Data":"a01d6813d3fe5033788bcf79e424d8c24edd16d648cb0d91df6cac2ad7e87721"} Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.033842 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-65brj" event={"ID":"f4e58a9e-ecc8-43de-9518-0b014b2a27d2","Type":"ContainerStarted","Data":"edf774cad918d0c903e63356c9349f3c9982e1a39088a1428250a648b8d006ca"} Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.035632 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-5qtks" podStartSLOduration=7.035613258 podStartE2EDuration="7.035613258s" podCreationTimestamp="2026-01-22 13:45:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:46:02.034132428 +0000 UTC m=+141.445242357" watchObservedRunningTime="2026-01-22 13:46:02.035613258 +0000 UTC m=+141.446723187" Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.067688 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6sgg2" event={"ID":"7d18d670-f698-4b8c-b6c3-300dc1ed8e46","Type":"ContainerStarted","Data":"50387bd8f7a7a56be6825d4bf66c471d92b09523500cbb6eeae67922844fcff8"} Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.069840 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6sgg2" Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.071080 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-65brj" podStartSLOduration=122.071064205 podStartE2EDuration="2m2.071064205s" podCreationTimestamp="2026-01-22 13:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-22 13:46:02.069318986 +0000 UTC m=+141.480428925" watchObservedRunningTime="2026-01-22 13:46:02.071064205 +0000 UTC m=+141.482174134" Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.095735 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6sgg2" Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.096250 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:46:02 crc kubenswrapper[4769]: E0122 13:46:02.096659 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:46:02.596645888 +0000 UTC m=+142.007755817 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.097854 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9nmqg" event={"ID":"0335a481-e6c1-459c-8325-5da8dfcbcdb1","Type":"ContainerStarted","Data":"1312b26fad537147167a9183728704277b377f20b6e7d69dec973c8bdfb320c3"} Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.125577 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s9v5x" event={"ID":"a6d7f1cf-d68c-4658-98b2-e18d8e70edb8","Type":"ContainerStarted","Data":"4a1bb7cc56593bc750b8f9678ba5779bcfaf3e70fccaf589fdf7d1db5a9ec23a"} Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.151515 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-ds5qk" event={"ID":"d8b75cc3-465e-4542-82ee-4950744e89a0","Type":"ContainerStarted","Data":"0d1cd2b147b83a98349400c6c23230d428ea76258737f8db5d5de0ced500e378"} Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.154302 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6sgg2" podStartSLOduration=122.154290074 podStartE2EDuration="2m2.154290074s" podCreationTimestamp="2026-01-22 13:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:46:02.122547541 +0000 UTC m=+141.533657480" watchObservedRunningTime="2026-01-22 13:46:02.154290074 +0000 UTC m=+141.565400003" Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.169392 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-rkk84" 
event={"ID":"bf805bae-0da1-4a8b-a8c8-6c99cf8ce515","Type":"ContainerStarted","Data":"dfe32d4afb4e757cc4ea729d697a684a0cdcb0a6f0a9f678263b28d7d9d302e7"} Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.188490 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s9v5x" podStartSLOduration=122.188474936 podStartE2EDuration="2m2.188474936s" podCreationTimestamp="2026-01-22 13:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:46:02.187843518 +0000 UTC m=+141.598953457" watchObservedRunningTime="2026-01-22 13:46:02.188474936 +0000 UTC m=+141.599584865" Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.197653 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:46:02 crc kubenswrapper[4769]: E0122 13:46:02.199244 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 13:46:02.699228701 +0000 UTC m=+142.110338630 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jhd8d" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.229865 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m5n64" event={"ID":"73369200-053d-4d9d-a775-c3cb76119697","Type":"ContainerStarted","Data":"fd9612b476f5c956cf00dfe340da5ee61e6237647e1343b0c7cb59eac9b9cf95"} Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.255090 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-98pt8" event={"ID":"10a252bf-8be9-40ee-9632-4abbb989e43d","Type":"ContainerStarted","Data":"bd846bbe321a9c4a59f95e0d2f83926c0b9add9d4e63dd1548a328273a6a4325"} Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.256182 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-98pt8" Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.258061 4769 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-98pt8 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused" start-of-body= Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.258111 4769 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-98pt8" podUID="10a252bf-8be9-40ee-9632-4abbb989e43d" 
containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused" Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.275578 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bxgr9" event={"ID":"9ddda125-6c9a-4546-901a-a32dd6e99251","Type":"ContainerStarted","Data":"3fe7ed6bf9a7623cc09d67823716b944198862d8419f9034e99517884a922e59"} Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.291372 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xdxvs" event={"ID":"6e9c7f00-95b3-4453-8d82-df8b88a2bc8a","Type":"ContainerStarted","Data":"fe8991de8a579d3543f2e45b57db67c0f93dba16a0a123b244efbb6f989087ea"} Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.293009 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-98pt8" podStartSLOduration=122.293000591 podStartE2EDuration="2m2.293000591s" podCreationTimestamp="2026-01-22 13:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:46:02.290941325 +0000 UTC m=+141.702051254" watchObservedRunningTime="2026-01-22 13:46:02.293000591 +0000 UTC m=+141.704110520" Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.299593 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484825-hgsdh" event={"ID":"3ef7a187-ce98-488c-a9b0-e16449e2882f","Type":"ContainerStarted","Data":"e652943776f78a5fd95ced60a7e853ebc62ea8a256a4dea93d8512bf63d1796f"} Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.299645 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484825-hgsdh" event={"ID":"3ef7a187-ce98-488c-a9b0-e16449e2882f","Type":"ContainerStarted","Data":"b5f0b3f3f7b7a0b35bdff04091a4f43dc2a4d7a638db51c8e64ac5ca77fff8bf"} Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.300519 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:46:02 crc kubenswrapper[4769]: E0122 13:46:02.301548 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:46:02.801523016 +0000 UTC m=+142.212632945 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.343707 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2s8ds" event={"ID":"2f88820f-4a65-4799-86f7-19be89871165","Type":"ContainerStarted","Data":"b6358b93440dba2ed8bbc2419f31a54a25b390f612f05e88cc62cf454c483e9b"} Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.343932 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bxgr9" podStartSLOduration=122.343922153 podStartE2EDuration="2m2.343922153s" podCreationTimestamp="2026-01-22 13:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:46:02.341055154 +0000 UTC m=+141.752165083" watchObservedRunningTime="2026-01-22 13:46:02.343922153 +0000 UTC m=+141.755032072" Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.373140 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-v24vn" event={"ID":"40076fe2-006c-4dc7-ac7c-71fa27c9bb7d","Type":"ContainerStarted","Data":"c60747a367f969aba8431d1264c3b06d853ad4743d25e5c6f5da73610a6a897d"} Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.374926 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-v24vn" Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.379875 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7wh4n"] Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.381449 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7wh4n"
Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.397609 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.401507 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29484825-hgsdh" podStartSLOduration=62.401487847 podStartE2EDuration="1m2.401487847s" podCreationTimestamp="2026-01-22 13:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:46:02.384669854 +0000 UTC m=+141.795779793" watchObservedRunningTime="2026-01-22 13:46:02.401487847 +0000 UTC m=+141.812597776"
Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.423937 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d"
Jan 22 13:46:02 crc kubenswrapper[4769]: E0122 13:46:02.424225 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 13:46:02.924213632 +0000 UTC m=+142.335323561 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jhd8d" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.431690 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7wh4n"]
Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.432628 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2s5j2" event={"ID":"8c1e55ad-d8f0-4ceb-b929-e4f09903df58","Type":"ContainerStarted","Data":"3590631e56908a9ec4b152769bca64d1042fd53ef303711e9fab3815b0bc646b"}
Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.438395 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-9mm5p" event={"ID":"ba0299e2-1902-461d-bf42-f3d5dfe205ff","Type":"ContainerStarted","Data":"4b585d21514770d3f9c2306b537095fb371b963944e17fb1c137e5b0bd19f513"}
Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.439283 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-d8wjb" event={"ID":"e01e843d-f221-43ed-a309-e21fe298f64f","Type":"ContainerStarted","Data":"3d38ffa8eb97c9b33acb30873648d0ba5c2c602e82463423c16aac06152bf76f"}
Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.439719 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2s8ds" podStartSLOduration=122.439698998 podStartE2EDuration="2m2.439698998s" podCreationTimestamp="2026-01-22 13:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:46:02.428047828 +0000 UTC m=+141.839157757" watchObservedRunningTime="2026-01-22 13:46:02.439698998 +0000 UTC m=+141.850808927"
Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.447998 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5jwbt" event={"ID":"dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae","Type":"ContainerStarted","Data":"63ce7caf2f29fa4c750335f093e515944a1c8003ddf040ccfa68087863d13e90"}
Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.450154 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-5jwbt"
Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.451872 4769 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-5jwbt container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body=
Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.451923 4769 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-5jwbt" podUID="dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused"
Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.469861 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-ggj4q" event={"ID":"eed71162-446a-4681-a3a8-23247149532c","Type":"ContainerStarted","Data":"6f6986368046f4813dd2f28239dc1bb2b0290e5232a2447970e0a6898b2a4cdd"}
Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.492441 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-tv6dp" event={"ID":"e9a409b5-e519-4c64-bc56-0b74757f2181","Type":"ContainerStarted","Data":"71d39971d170735b8c8c23ba28563a7279d991261ca0796ba01c898d1d545fa8"}
Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.508389 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pzj8w" event={"ID":"db7a69ec-2a82-4f9b-b83a-42237a02087e","Type":"ContainerStarted","Data":"a7af6a04e7cd4d5c52b9ee75410182cb2ee12111f77496e96fb1c9b65cf071ec"}
Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.519581 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9z2dj" event={"ID":"a9e87e73-cad4-48f0-81f9-d636cd123278","Type":"ContainerStarted","Data":"2df7dfe6b8cd6f8ac6ce3bca874c3990aabcfba5a1e0b6f7e4e698bc7ac687ef"}
Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.519638 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9z2dj" event={"ID":"a9e87e73-cad4-48f0-81f9-d636cd123278","Type":"ContainerStarted","Data":"e2430dba990f43198415f44fd75c50ccb8307e6c6829cb533d4dbeef52eef739"}
Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.521319 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-v24vn" podStartSLOduration=122.521302884 podStartE2EDuration="2m2.521302884s" podCreationTimestamp="2026-01-22 13:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:46:02.51934831 +0000 UTC m=+141.930458239" watchObservedRunningTime="2026-01-22 13:46:02.521302884 +0000 UTC m=+141.932412803"
Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.525490 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.525733 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xx5tc\" (UniqueName: \"kubernetes.io/projected/4f403243-0359-478d-a3a6-29a8f0bc29e2-kube-api-access-xx5tc\") pod \"certified-operators-7wh4n\" (UID: \"4f403243-0359-478d-a3a6-29a8f0bc29e2\") " pod="openshift-marketplace/certified-operators-7wh4n"
Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.525781 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f403243-0359-478d-a3a6-29a8f0bc29e2-catalog-content\") pod \"certified-operators-7wh4n\" (UID: \"4f403243-0359-478d-a3a6-29a8f0bc29e2\") " pod="openshift-marketplace/certified-operators-7wh4n"
Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.525905 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f403243-0359-478d-a3a6-29a8f0bc29e2-utilities\") pod \"certified-operators-7wh4n\" (UID: \"4f403243-0359-478d-a3a6-29a8f0bc29e2\") " pod="openshift-marketplace/certified-operators-7wh4n"
Jan 22 13:46:02 crc kubenswrapper[4769]: E0122 13:46:02.526021 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:46:03.026002723 +0000 UTC m=+142.437112652 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.539538 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-28gzs" event={"ID":"db199c04-6231-46b3-a4e7-5cd74604b005","Type":"ContainerStarted","Data":"1783eb565dba674e2215e780ddfb9a85c4591980102182b29cf78e91f7baeb4b"}
Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.548762 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-5jwbt" podStartSLOduration=122.548745139 podStartE2EDuration="2m2.548745139s" podCreationTimestamp="2026-01-22 13:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:46:02.548218124 +0000 UTC m=+141.959328073" watchObservedRunningTime="2026-01-22 13:46:02.548745139 +0000 UTC m=+141.959855078"
Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.562869 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-gcpwt" event={"ID":"153c6af8-5ac1-4256-ad20-992ad604c61b","Type":"ContainerStarted","Data":"4787a7edab73af9b0c9225ffaebb8d363b75f78cc5c3797c1b9178c04d12396b"}
Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.562911 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-gcpwt" event={"ID":"153c6af8-5ac1-4256-ad20-992ad604c61b","Type":"ContainerStarted","Data":"5874e352d894d8b1ca7ab3f6d108eb339801a41c4b601856bf6a2acb1cfda348"}
Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.566378 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lxbp4"]
Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.570944 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lxbp4"]
Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.571026 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lxbp4"
Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.572381 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.591889 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-q8sxk" event={"ID":"81769776-c586-45a0-a9ed-42ce4789bb28","Type":"ContainerStarted","Data":"fda6bd1004f0814284e3117f625ff12c08d18ec3c6e15a79839178425b5b3107"}
Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.593294 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-q8sxk"
Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.622180 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jr9vm" event={"ID":"e7c7c3d4-58d6-4bd2-a85c-7b933bb20d43","Type":"ContainerStarted","Data":"7a1b23df60e71e322b4eb4aaade21c96a3fb3ac691e403b084152b410f28c70a"}
Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.622226 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jr9vm" event={"ID":"e7c7c3d4-58d6-4bd2-a85c-7b933bb20d43","Type":"ContainerStarted","Data":"d4b75c72c9393a99da0514694605c5e1c9e01efa513b9e7993959024dc8d095e"}
Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.622269 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-q8sxk"
Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.623079 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-ggj4q" podStartSLOduration=7.623061174 podStartE2EDuration="7.623061174s" podCreationTimestamp="2026-01-22 13:45:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:46:02.593189192 +0000 UTC m=+142.004299111" watchObservedRunningTime="2026-01-22 13:46:02.623061174 +0000 UTC m=+142.034171103"
Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.624140 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jr9vm"
Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.627600 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xx5tc\" (UniqueName: \"kubernetes.io/projected/4f403243-0359-478d-a3a6-29a8f0bc29e2-kube-api-access-xx5tc\") pod \"certified-operators-7wh4n\" (UID: \"4f403243-0359-478d-a3a6-29a8f0bc29e2\") " pod="openshift-marketplace/certified-operators-7wh4n"
Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.627636 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f403243-0359-478d-a3a6-29a8f0bc29e2-catalog-content\") pod \"certified-operators-7wh4n\" (UID: \"4f403243-0359-478d-a3a6-29a8f0bc29e2\") " pod="openshift-marketplace/certified-operators-7wh4n"
Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.627784 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d"
Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.627845 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f403243-0359-478d-a3a6-29a8f0bc29e2-utilities\") pod \"certified-operators-7wh4n\" (UID: \"4f403243-0359-478d-a3a6-29a8f0bc29e2\") " pod="openshift-marketplace/certified-operators-7wh4n"
Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.630625 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f403243-0359-478d-a3a6-29a8f0bc29e2-catalog-content\") pod \"certified-operators-7wh4n\" (UID: \"4f403243-0359-478d-a3a6-29a8f0bc29e2\") " pod="openshift-marketplace/certified-operators-7wh4n"
Jan 22 13:46:02 crc kubenswrapper[4769]: E0122 13:46:02.632712 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 13:46:03.132697069 +0000 UTC m=+142.543806998 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jhd8d" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.633303 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f403243-0359-478d-a3a6-29a8f0bc29e2-utilities\") pod \"certified-operators-7wh4n\" (UID: \"4f403243-0359-478d-a3a6-29a8f0bc29e2\") " pod="openshift-marketplace/certified-operators-7wh4n"
Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.656368 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5lfqv" event={"ID":"1fbc7f2a-fce4-4747-9a96-1fc4631a6197","Type":"ContainerStarted","Data":"feb7629d8f114fe6483974b25cf3b1820b5dd34a46c66b845e4d676f51cc766b"}
Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.658083 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xx5tc\" (UniqueName: \"kubernetes.io/projected/4f403243-0359-478d-a3a6-29a8f0bc29e2-kube-api-access-xx5tc\") pod \"certified-operators-7wh4n\" (UID: \"4f403243-0359-478d-a3a6-29a8f0bc29e2\") " pod="openshift-marketplace/certified-operators-7wh4n"
Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.663601 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-2vm4g"
Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.681581 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-tv6dp" podStartSLOduration=122.681560944 podStartE2EDuration="2m2.681560944s" podCreationTimestamp="2026-01-22 13:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:46:02.681168702 +0000 UTC m=+142.092278641" watchObservedRunningTime="2026-01-22 13:46:02.681560944 +0000 UTC m=+142.092670893"
Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.685161 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2s5j2" podStartSLOduration=123.685152552 podStartE2EDuration="2m3.685152552s" podCreationTimestamp="2026-01-22 13:43:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:46:02.624147794 +0000 UTC m=+142.035257723" watchObservedRunningTime="2026-01-22 13:46:02.685152552 +0000 UTC m=+142.096262491"
Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.729691 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.730036 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d9e80ce-c46e-4a99-814e-0d9b1b65623f-utilities\") pod \"community-operators-lxbp4\" (UID: \"7d9e80ce-c46e-4a99-814e-0d9b1b65623f\") " pod="openshift-marketplace/community-operators-lxbp4"
Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.730091 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d9e80ce-c46e-4a99-814e-0d9b1b65623f-catalog-content\") pod \"community-operators-lxbp4\" (UID: \"7d9e80ce-c46e-4a99-814e-0d9b1b65623f\") " pod="openshift-marketplace/community-operators-lxbp4"
Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.730112 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x86gf\" (UniqueName: \"kubernetes.io/projected/7d9e80ce-c46e-4a99-814e-0d9b1b65623f-kube-api-access-x86gf\") pod \"community-operators-lxbp4\" (UID: \"7d9e80ce-c46e-4a99-814e-0d9b1b65623f\") " pod="openshift-marketplace/community-operators-lxbp4"
Jan 22 13:46:02 crc kubenswrapper[4769]: E0122 13:46:02.731134 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:46:03.231114507 +0000 UTC m=+142.642224436 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.743708 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-gcpwt" podStartSLOduration=122.743691483 podStartE2EDuration="2m2.743691483s" podCreationTimestamp="2026-01-22 13:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:46:02.703513307 +0000 UTC m=+142.114623236" watchObservedRunningTime="2026-01-22 13:46:02.743691483 +0000 UTC m=+142.154801412"
Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.745193 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-q8sxk" podStartSLOduration=122.745188284 podStartE2EDuration="2m2.745188284s" podCreationTimestamp="2026-01-22 13:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:46:02.741919865 +0000 UTC m=+142.153029794" watchObservedRunningTime="2026-01-22 13:46:02.745188284 +0000 UTC m=+142.156298213"
Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.769749 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2ks9m"]
Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.771018 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2ks9m"
Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.776751 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5lfqv" podStartSLOduration=122.776733593 podStartE2EDuration="2m2.776733593s" podCreationTimestamp="2026-01-22 13:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:46:02.77518272 +0000 UTC m=+142.186292659" watchObservedRunningTime="2026-01-22 13:46:02.776733593 +0000 UTC m=+142.187843522"
Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.820705 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2ks9m"]
Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.841522 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d"
Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.842221 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d9e80ce-c46e-4a99-814e-0d9b1b65623f-utilities\") pod \"community-operators-lxbp4\" (UID: \"7d9e80ce-c46e-4a99-814e-0d9b1b65623f\") " pod="openshift-marketplace/community-operators-lxbp4"
Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.842378 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d9e80ce-c46e-4a99-814e-0d9b1b65623f-catalog-content\") pod \"community-operators-lxbp4\" (UID: \"7d9e80ce-c46e-4a99-814e-0d9b1b65623f\") " pod="openshift-marketplace/community-operators-lxbp4"
Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.842437 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x86gf\" (UniqueName: \"kubernetes.io/projected/7d9e80ce-c46e-4a99-814e-0d9b1b65623f-kube-api-access-x86gf\") pod \"community-operators-lxbp4\" (UID: \"7d9e80ce-c46e-4a99-814e-0d9b1b65623f\") " pod="openshift-marketplace/community-operators-lxbp4"
Jan 22 13:46:02 crc kubenswrapper[4769]: E0122 13:46:02.843043 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 13:46:03.343028707 +0000 UTC m=+142.754138636 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jhd8d" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.873253 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d9e80ce-c46e-4a99-814e-0d9b1b65623f-catalog-content\") pod \"community-operators-lxbp4\" (UID: \"7d9e80ce-c46e-4a99-814e-0d9b1b65623f\") " pod="openshift-marketplace/community-operators-lxbp4"
Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.874082 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d9e80ce-c46e-4a99-814e-0d9b1b65623f-utilities\") pod \"community-operators-lxbp4\" (UID: \"7d9e80ce-c46e-4a99-814e-0d9b1b65623f\") " pod="openshift-marketplace/community-operators-lxbp4"
Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.930230 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-28gzs" podStartSLOduration=122.930207905 podStartE2EDuration="2m2.930207905s" podCreationTimestamp="2026-01-22 13:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:46:02.919257334 +0000 UTC m=+142.330367263" watchObservedRunningTime="2026-01-22 13:46:02.930207905 +0000 UTC m=+142.341317834"
Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.939695 4769 csr.go:261] certificate signing request csr-gj856 is approved, waiting to be issued
Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.939842 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x86gf\" (UniqueName: \"kubernetes.io/projected/7d9e80ce-c46e-4a99-814e-0d9b1b65623f-kube-api-access-x86gf\") pod \"community-operators-lxbp4\" (UID: \"7d9e80ce-c46e-4a99-814e-0d9b1b65623f\") " pod="openshift-marketplace/community-operators-lxbp4"
Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.958872 4769 csr.go:257] certificate signing request csr-gj856 is issued
Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.959305 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5rnmz"]
Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.960088 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.960221 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc744951-0370-42be-a1c0-e639d8d8cd31-utilities\") pod \"certified-operators-2ks9m\" (UID: \"bc744951-0370-42be-a1c0-e639d8d8cd31\") " pod="openshift-marketplace/certified-operators-2ks9m"
Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.960251 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc744951-0370-42be-a1c0-e639d8d8cd31-catalog-content\") pod \"certified-operators-2ks9m\" (UID: \"bc744951-0370-42be-a1c0-e639d8d8cd31\") " pod="openshift-marketplace/certified-operators-2ks9m"
Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.960313 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmkrp\" (UniqueName: \"kubernetes.io/projected/bc744951-0370-42be-a1c0-e639d8d8cd31-kube-api-access-xmkrp\") pod \"certified-operators-2ks9m\" (UID: \"bc744951-0370-42be-a1c0-e639d8d8cd31\") " pod="openshift-marketplace/certified-operators-2ks9m"
Jan 22 13:46:02 crc kubenswrapper[4769]: E0122 13:46:02.960412 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:46:03.460399317 +0000 UTC m=+142.871509246 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.960688 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5rnmz"
Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.979019 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9z2dj" podStartSLOduration=122.978999008 podStartE2EDuration="2m2.978999008s" podCreationTimestamp="2026-01-22 13:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:46:02.96815626 +0000 UTC m=+142.379266189" watchObservedRunningTime="2026-01-22 13:46:02.978999008 +0000 UTC m=+142.390108927"
Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.999099 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5rnmz"]
Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.006083 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jr9vm" podStartSLOduration=123.006066513 podStartE2EDuration="2m3.006066513s" podCreationTimestamp="2026-01-22 13:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:46:03.002616068 +0000 UTC m=+142.413725997" watchObservedRunningTime="2026-01-22 13:46:03.006066513 +0000 UTC m=+142.417176442"
Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.006864 4769 patch_prober.go:28] interesting pod/router-default-5444994796-pb7qw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 22 13:46:03 crc kubenswrapper[4769]: [-]has-synced failed: reason withheld
Jan 22 13:46:03 crc kubenswrapper[4769]: [+]process-running ok
Jan 22 13:46:03 crc kubenswrapper[4769]: healthz check failed
Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.006937 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pb7qw" podUID="5c5cf556-ec03-4f29-94ed-13a58f54275c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.033161 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7wh4n"
Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.055780 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lxbp4"
Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.061768 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d"
Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.062078 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmkrp\" (UniqueName: \"kubernetes.io/projected/bc744951-0370-42be-a1c0-e639d8d8cd31-kube-api-access-xmkrp\") pod \"certified-operators-2ks9m\" (UID: \"bc744951-0370-42be-a1c0-e639d8d8cd31\") " pod="openshift-marketplace/certified-operators-2ks9m"
Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.062211 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b69c283-f109-4f09-9a01-8d21d3764892-utilities\") pod \"community-operators-5rnmz\" (UID: \"3b69c283-f109-4f09-9a01-8d21d3764892\") " pod="openshift-marketplace/community-operators-5rnmz"
Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.062282 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gj54v\" (UniqueName: \"kubernetes.io/projected/3b69c283-f109-4f09-9a01-8d21d3764892-kube-api-access-gj54v\") pod \"community-operators-5rnmz\" (UID: \"3b69c283-f109-4f09-9a01-8d21d3764892\") " pod="openshift-marketplace/community-operators-5rnmz"
Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.062357 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b69c283-f109-4f09-9a01-8d21d3764892-catalog-content\") pod \"community-operators-5rnmz\" (UID: \"3b69c283-f109-4f09-9a01-8d21d3764892\") " pod="openshift-marketplace/community-operators-5rnmz"
Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.062495 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc744951-0370-42be-a1c0-e639d8d8cd31-utilities\") pod \"certified-operators-2ks9m\" (UID: \"bc744951-0370-42be-a1c0-e639d8d8cd31\") " pod="openshift-marketplace/certified-operators-2ks9m"
Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.062848 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc744951-0370-42be-a1c0-e639d8d8cd31-catalog-content\") pod \"certified-operators-2ks9m\" (UID: \"bc744951-0370-42be-a1c0-e639d8d8cd31\") " pod="openshift-marketplace/certified-operators-2ks9m"
Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.063654 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc744951-0370-42be-a1c0-e639d8d8cd31-catalog-content\") pod \"certified-operators-2ks9m\" (UID: \"bc744951-0370-42be-a1c0-e639d8d8cd31\") " pod="openshift-marketplace/certified-operators-2ks9m"
Jan 22 13:46:03 crc kubenswrapper[4769]: E0122 13:46:03.064159 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 13:46:03.564147571 +0000 UTC m=+142.975257500 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jhd8d" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.064818 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc744951-0370-42be-a1c0-e639d8d8cd31-utilities\") pod \"certified-operators-2ks9m\" (UID: \"bc744951-0370-42be-a1c0-e639d8d8cd31\") " pod="openshift-marketplace/certified-operators-2ks9m"
Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.100984 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmkrp\" (UniqueName: \"kubernetes.io/projected/bc744951-0370-42be-a1c0-e639d8d8cd31-kube-api-access-xmkrp\") pod \"certified-operators-2ks9m\" (UID: \"bc744951-0370-42be-a1c0-e639d8d8cd31\") " pod="openshift-marketplace/certified-operators-2ks9m"
Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.166438 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.166780 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b69c283-f109-4f09-9a01-8d21d3764892-utilities\") pod \"community-operators-5rnmz\" (UID: \"3b69c283-f109-4f09-9a01-8d21d3764892\") " pod="openshift-marketplace/community-operators-5rnmz"
Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.166832 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gj54v\" (UniqueName: \"kubernetes.io/projected/3b69c283-f109-4f09-9a01-8d21d3764892-kube-api-access-gj54v\") pod \"community-operators-5rnmz\" (UID: \"3b69c283-f109-4f09-9a01-8d21d3764892\") " pod="openshift-marketplace/community-operators-5rnmz"
Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.166868 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b69c283-f109-4f09-9a01-8d21d3764892-catalog-content\") pod \"community-operators-5rnmz\" (UID: \"3b69c283-f109-4f09-9a01-8d21d3764892\") " pod="openshift-marketplace/community-operators-5rnmz"
Jan 22 13:46:03 crc kubenswrapper[4769]: E0122 13:46:03.167223 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:46:03.667193867 +0000 UTC m=+143.078303796 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.167406 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b69c283-f109-4f09-9a01-8d21d3764892-catalog-content\") pod \"community-operators-5rnmz\" (UID: \"3b69c283-f109-4f09-9a01-8d21d3764892\") " pod="openshift-marketplace/community-operators-5rnmz"
Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.167840 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b69c283-f109-4f09-9a01-8d21d3764892-utilities\") pod \"community-operators-5rnmz\" (UID: \"3b69c283-f109-4f09-9a01-8d21d3764892\") " pod="openshift-marketplace/community-operators-5rnmz"
Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.184256 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-v24vn"
Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.204354 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gj54v\" (UniqueName: \"kubernetes.io/projected/3b69c283-f109-4f09-9a01-8d21d3764892-kube-api-access-gj54v\") pod \"community-operators-5rnmz\" (UID: \"3b69c283-f109-4f09-9a01-8d21d3764892\") " pod="openshift-marketplace/community-operators-5rnmz"
Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.269750 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d"
Jan 22 13:46:03 crc kubenswrapper[4769]: E0122 13:46:03.270082 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 13:46:03.770069248 +0000 UTC m=+143.181179177 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jhd8d" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.370703 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 13:46:03 crc kubenswrapper[4769]: E0122 13:46:03.371116 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:46:03.871096708 +0000 UTC m=+143.282206637 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.372739 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2ks9m"
Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.385859 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5rnmz"
Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.476213 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d"
Jan 22 13:46:03 crc kubenswrapper[4769]: E0122 13:46:03.477717 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 13:46:03.977700661 +0000 UTC m=+143.388810590 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jhd8d" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.585289 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 13:46:03 crc kubenswrapper[4769]: E0122 13:46:03.585399 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:46:04.085380194 +0000 UTC m=+143.496490133 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.585558 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d"
Jan 22 13:46:03 crc kubenswrapper[4769]: E0122 13:46:03.585984 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 13:46:04.085976471 +0000 UTC m=+143.497086400 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jhd8d" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.609783 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7wh4n"]
Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.689331 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 13:46:03 crc kubenswrapper[4769]: E0122 13:46:03.689709 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:46:04.189695505 +0000 UTC m=+143.600805434 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.724745 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-jjt2k" event={"ID":"15723c66-27d3-4cea-9962-e75bbe7bb967","Type":"ContainerStarted","Data":"ca83742f3ffbd2cbede8c2894a0b9fa6eb0b873be05c34d082d77e936acb6ff4"}
Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.736173 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xdxvs" event={"ID":"6e9c7f00-95b3-4453-8d82-df8b88a2bc8a","Type":"ContainerStarted","Data":"cc790897f2c03cd237b709a253cb8feb60b3f8c8e7eec02f6850961c5370fd8c"}
Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.760078 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-tv6dp" event={"ID":"e9a409b5-e519-4c64-bc56-0b74757f2181","Type":"ContainerStarted","Data":"9e3cb59eace57b4102e496545a698e917cd834c618e16e3081ae1ebd33ad7120"}
Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.784319 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lxbp4"]
Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.790484 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d"
Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.790555 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m5n64" event={"ID":"73369200-053d-4d9d-a775-c3cb76119697","Type":"ContainerStarted","Data":"382ca9326aade27f0aab2053e02cd05727dacd8574e389c25ff24d8ec2837257"}
Jan 22 13:46:03 crc kubenswrapper[4769]: E0122 13:46:03.790849 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 13:46:04.290778316 +0000 UTC m=+143.701888245 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jhd8d" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 13:46:03 crc kubenswrapper[4769]: W0122 13:46:03.813414 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d9e80ce_c46e_4a99_814e_0d9b1b65623f.slice/crio-87dc0ac39542afbc65ec3e6d0bdb93cd67aa154947a205f465b24220379804bc WatchSource:0}: Error finding container 87dc0ac39542afbc65ec3e6d0bdb93cd67aa154947a205f465b24220379804bc: Status 404 returned error can't find the container with id 87dc0ac39542afbc65ec3e6d0bdb93cd67aa154947a205f465b24220379804bc
Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.827689 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m5n64" podStartSLOduration=123.827671581 podStartE2EDuration="2m3.827671581s" podCreationTimestamp="2026-01-22 13:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:46:03.826970192 +0000 UTC m=+143.238080121" watchObservedRunningTime="2026-01-22 13:46:03.827671581 +0000 UTC m=+143.238781510"
Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.838137 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-d8wjb" event={"ID":"e01e843d-f221-43ed-a309-e21fe298f64f","Type":"ContainerStarted","Data":"5df2bcc88c3539be053859b0ed4af5a02ecc6750223637d5feb1f5a2787fbabb"}
Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.838173 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-d8wjb" event={"ID":"e01e843d-f221-43ed-a309-e21fe298f64f","Type":"ContainerStarted","Data":"045108c3c52dc6747143cb77f27616fa92adce96e5936b3f02feae3c1494b215"}
Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.883430 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-rkk84" event={"ID":"bf805bae-0da1-4a8b-a8c8-6c99cf8ce515","Type":"ContainerStarted","Data":"01a3f246379213d65b9a158fbe47e4a4c4c2be6de6c3bcf62110d9d310295640"}
Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.893587 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 13:46:03 crc kubenswrapper[4769]: E0122 13:46:03.895190 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:46:04.395168579 +0000 UTC m=+143.806278498 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.977749 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-22 13:41:02 +0000 UTC, rotation deadline is 2026-10-12 20:04:18.388215761 +0000 UTC
Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.978076 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6318h18m14.410143151s for next certificate rotation
Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.985967 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-98pt8" event={"ID":"10a252bf-8be9-40ee-9632-4abbb989e43d","Type":"ContainerStarted","Data":"629d0b002a1282938d8599528a95f65e462ecb85cc83338d24c0a454c4c4b054"}
Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.988851 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-d8wjb" podStartSLOduration=123.988827886 podStartE2EDuration="2m3.988827886s" podCreationTimestamp="2026-01-22 13:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:46:03.883814476 +0000 UTC m=+143.294924415" watchObservedRunningTime="2026-01-22 13:46:03.988827886 +0000 UTC m=+143.399937815"
Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.990829 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2ks9m"]
Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.994004 4769 patch_prober.go:28] interesting pod/router-default-5444994796-pb7qw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 22 13:46:03 crc kubenswrapper[4769]: [-]has-synced failed: reason withheld
Jan 22 13:46:03 crc kubenswrapper[4769]: [+]process-running ok
Jan 22 13:46:03 crc kubenswrapper[4769]: healthz check failed
Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.994042 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pb7qw" podUID="5c5cf556-ec03-4f29-94ed-13a58f54275c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.997003 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d"
Jan 22 13:46:03 crc kubenswrapper[4769]: E0122 13:46:03.997300 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 13:46:04.497288508 +0000 UTC m=+143.908398437 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jhd8d" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.019044 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-98pt8"
Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.082456 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5rnmz"]
Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.098507 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 13:46:04 crc kubenswrapper[4769]: E0122 13:46:04.100097 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:46:04.600074257 +0000 UTC m=+144.011184186 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.108695 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-ds5qk" event={"ID":"d8b75cc3-465e-4542-82ee-4950744e89a0","Type":"ContainerStarted","Data":"a6f1bffc6f3dc034901807a0acbe99c9f655397cff040c51f93bb72fd120f61b"}
Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.120205 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d"
Jan 22 13:46:04 crc kubenswrapper[4769]: E0122 13:46:04.121559 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 13:46:04.621547618 +0000 UTC m=+144.032657537 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jhd8d" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.123529 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5985" event={"ID":"81a5be64-af9a-4376-9105-c36371ad5069","Type":"ContainerStarted","Data":"01c918f6a922286d54e6f0f6dd759a743d396e8baa23df0990ec2306e48769b5"}
Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.147106 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-ds5qk" podStartSLOduration=124.14708186 podStartE2EDuration="2m4.14708186s" podCreationTimestamp="2026-01-22 13:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:46:04.135848271 +0000 UTC m=+143.546958200" watchObservedRunningTime="2026-01-22 13:46:04.14708186 +0000 UTC m=+143.558191779"
Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.165954 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5985" podStartSLOduration=124.1659314 podStartE2EDuration="2m4.1659314s" podCreationTimestamp="2026-01-22 13:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:46:04.165069155 +0000 UTC m=+143.576179084" watchObservedRunningTime="2026-01-22 13:46:04.1659314 +0000 UTC m=+143.577041349"
Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.203040 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5lfqv" event={"ID":"1fbc7f2a-fce4-4747-9a96-1fc4631a6197","Type":"ContainerStarted","Data":"4704dec99621c6846e3261de6e333b789fa362c283134a1fab3ae7c38e0c05b3"}
Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.221625 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.222590 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-9mm5p" event={"ID":"ba0299e2-1902-461d-bf42-f3d5dfe205ff","Type":"ContainerStarted","Data":"1ca238b1d6d0149a30a8e8311d14f69bde547eeb40d5660b4b7dd4e246123077"}
Jan 22 13:46:04 crc kubenswrapper[4769]: E0122 13:46:04.223466 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:46:04.723447102 +0000 UTC m=+144.134557031 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.254445 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-9mm5p" podStartSLOduration=124.254427324 podStartE2EDuration="2m4.254427324s" podCreationTimestamp="2026-01-22 13:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:46:04.247569556 +0000 UTC m=+143.658679475" watchObservedRunningTime="2026-01-22 13:46:04.254427324 +0000 UTC m=+143.665537253"
Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.264249 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rcksw" event={"ID":"3f91eb97-e4cc-4a67-9426-7aec499b4485","Type":"ContainerStarted","Data":"61a3fa95fd4a0fa91ff0dccbe0ce875b95e5770d8cc2831c9dd54d8ce1d26ba6"}
Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.264326 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rcksw" event={"ID":"3f91eb97-e4cc-4a67-9426-7aec499b4485","Type":"ContainerStarted","Data":"3a7d0770e83ad70664df9e69ba0f3f806e8403b2ecdb4a1cf5a3de483a6c5fd6"}
Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.283937 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9nmqg" event={"ID":"0335a481-e6c1-459c-8325-5da8dfcbcdb1","Type":"ContainerStarted","Data":"b0ea037cd7cca93fb3844e9a96e5c8964f5fa0135c19062b7474d17fcd87d1e5"}
Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.288752 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rcksw" podStartSLOduration=124.288732559 podStartE2EDuration="2m4.288732559s" podCreationTimestamp="2026-01-22 13:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:46:04.287839914 +0000 UTC m=+143.698949843" watchObservedRunningTime="2026-01-22 13:46:04.288732559 +0000 UTC m=+143.699842488"
Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.310421 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-5jwbt"
Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.332593 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d"
Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.332688 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9nmqg" podStartSLOduration=124.332667417 podStartE2EDuration="2m4.332667417s" podCreationTimestamp="2026-01-22 13:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:46:04.324119622 +0000 UTC m=+143.735229551" watchObservedRunningTime="2026-01-22 13:46:04.332667417 +0000 UTC m=+143.743777346"
Jan 22 13:46:04 crc kubenswrapper[4769]: E0122 13:46:04.337636 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 13:46:04.837621274 +0000 UTC m=+144.248731203 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jhd8d" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.376945 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-v8jk5"]
Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.378220 4769 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v8jk5" Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.381217 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.385932 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v8jk5"] Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.434317 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.434592 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98dd81ac-1a92-4d5a-9e09-bcc49ac33a85-catalog-content\") pod \"redhat-marketplace-v8jk5\" (UID: \"98dd81ac-1a92-4d5a-9e09-bcc49ac33a85\") " pod="openshift-marketplace/redhat-marketplace-v8jk5" Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.434691 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98dd81ac-1a92-4d5a-9e09-bcc49ac33a85-utilities\") pod \"redhat-marketplace-v8jk5\" (UID: \"98dd81ac-1a92-4d5a-9e09-bcc49ac33a85\") " pod="openshift-marketplace/redhat-marketplace-v8jk5" Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.434713 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dm4mw\" (UniqueName: \"kubernetes.io/projected/98dd81ac-1a92-4d5a-9e09-bcc49ac33a85-kube-api-access-dm4mw\") pod \"redhat-marketplace-v8jk5\" (UID: \"98dd81ac-1a92-4d5a-9e09-bcc49ac33a85\") " pod="openshift-marketplace/redhat-marketplace-v8jk5" Jan 22 13:46:04 crc kubenswrapper[4769]: E0122 13:46:04.434813 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:46:04.934784018 +0000 UTC m=+144.345893947 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.536671 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98dd81ac-1a92-4d5a-9e09-bcc49ac33a85-catalog-content\") pod \"redhat-marketplace-v8jk5\" (UID: \"98dd81ac-1a92-4d5a-9e09-bcc49ac33a85\") " pod="openshift-marketplace/redhat-marketplace-v8jk5" Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.537068 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98dd81ac-1a92-4d5a-9e09-bcc49ac33a85-utilities\") pod \"redhat-marketplace-v8jk5\" (UID: \"98dd81ac-1a92-4d5a-9e09-bcc49ac33a85\") " pod="openshift-marketplace/redhat-marketplace-v8jk5" Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.537090 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dm4mw\" (UniqueName: \"kubernetes.io/projected/98dd81ac-1a92-4d5a-9e09-bcc49ac33a85-kube-api-access-dm4mw\") pod \"redhat-marketplace-v8jk5\" (UID: \"98dd81ac-1a92-4d5a-9e09-bcc49ac33a85\") " pod="openshift-marketplace/redhat-marketplace-v8jk5" Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.537149 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:46:04 crc kubenswrapper[4769]: E0122 13:46:04.537988 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 13:46:05.037973467 +0000 UTC m=+144.449083396 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jhd8d" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.538558 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98dd81ac-1a92-4d5a-9e09-bcc49ac33a85-catalog-content\") pod \"redhat-marketplace-v8jk5\" (UID: \"98dd81ac-1a92-4d5a-9e09-bcc49ac33a85\") " pod="openshift-marketplace/redhat-marketplace-v8jk5" Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.538710 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98dd81ac-1a92-4d5a-9e09-bcc49ac33a85-utilities\") pod \"redhat-marketplace-v8jk5\" (UID: \"98dd81ac-1a92-4d5a-9e09-bcc49ac33a85\") " pod="openshift-marketplace/redhat-marketplace-v8jk5" Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.571634 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dm4mw\" (UniqueName: \"kubernetes.io/projected/98dd81ac-1a92-4d5a-9e09-bcc49ac33a85-kube-api-access-dm4mw\") pod \"redhat-marketplace-v8jk5\" (UID: \"98dd81ac-1a92-4d5a-9e09-bcc49ac33a85\") " pod="openshift-marketplace/redhat-marketplace-v8jk5" Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.639165 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:46:04 crc kubenswrapper[4769]: E0122 13:46:04.639343 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:46:05.139318116 +0000 UTC m=+144.550428045 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.639498 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:46:04 crc kubenswrapper[4769]: E0122 13:46:04.639829 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 13:46:05.139821639 +0000 UTC m=+144.550931568 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jhd8d" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.739876 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:46:04 crc kubenswrapper[4769]: E0122 13:46:04.740071 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:46:05.240040547 +0000 UTC m=+144.651150466 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.740215 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:46:04 crc kubenswrapper[4769]: E0122 13:46:04.740659 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 13:46:05.240650314 +0000 UTC m=+144.651760243 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jhd8d" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.756517 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-j2rz6"] Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.757448 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j2rz6" Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.777747 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j2rz6"] Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.804458 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v8jk5" Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.841218 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:46:04 crc kubenswrapper[4769]: E0122 13:46:04.844143 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:46:05.34408113 +0000 UTC m=+144.755191059 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.863350 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.863619 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn7q6\" (UniqueName: \"kubernetes.io/projected/9fbf5655-9685-4e15-a6af-41793097be11-kube-api-access-mn7q6\") pod \"redhat-marketplace-j2rz6\" (UID: \"9fbf5655-9685-4e15-a6af-41793097be11\") " pod="openshift-marketplace/redhat-marketplace-j2rz6" Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.863668 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fbf5655-9685-4e15-a6af-41793097be11-utilities\") pod \"redhat-marketplace-j2rz6\" (UID: \"9fbf5655-9685-4e15-a6af-41793097be11\") " pod="openshift-marketplace/redhat-marketplace-j2rz6" Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.863892 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fbf5655-9685-4e15-a6af-41793097be11-catalog-content\") pod \"redhat-marketplace-j2rz6\" (UID: \"9fbf5655-9685-4e15-a6af-41793097be11\") " pod="openshift-marketplace/redhat-marketplace-j2rz6" Jan 22 13:46:04 crc kubenswrapper[4769]: E0122 13:46:04.864298 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 13:46:05.364280896 +0000 UTC m=+144.775390825 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jhd8d" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.964480 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.964702 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mn7q6\" (UniqueName: \"kubernetes.io/projected/9fbf5655-9685-4e15-a6af-41793097be11-kube-api-access-mn7q6\") pod \"redhat-marketplace-j2rz6\" (UID: \"9fbf5655-9685-4e15-a6af-41793097be11\") " pod="openshift-marketplace/redhat-marketplace-j2rz6" Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.964727 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fbf5655-9685-4e15-a6af-41793097be11-utilities\") pod \"redhat-marketplace-j2rz6\" (UID: \"9fbf5655-9685-4e15-a6af-41793097be11\") " pod="openshift-marketplace/redhat-marketplace-j2rz6" Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.964815 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fbf5655-9685-4e15-a6af-41793097be11-catalog-content\") pod \"redhat-marketplace-j2rz6\" (UID: \"9fbf5655-9685-4e15-a6af-41793097be11\") " pod="openshift-marketplace/redhat-marketplace-j2rz6" Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.965214 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fbf5655-9685-4e15-a6af-41793097be11-catalog-content\") pod \"redhat-marketplace-j2rz6\" (UID: \"9fbf5655-9685-4e15-a6af-41793097be11\") " pod="openshift-marketplace/redhat-marketplace-j2rz6" Jan 22 13:46:04 crc kubenswrapper[4769]: E0122 13:46:04.965280 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:46:05.465265735 +0000 UTC m=+144.876375664 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.965686 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fbf5655-9685-4e15-a6af-41793097be11-utilities\") pod \"redhat-marketplace-j2rz6\" (UID: \"9fbf5655-9685-4e15-a6af-41793097be11\") " pod="openshift-marketplace/redhat-marketplace-j2rz6" Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.990119 4769 patch_prober.go:28] interesting pod/router-default-5444994796-pb7qw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 13:46:04 crc kubenswrapper[4769]: [-]has-synced failed: reason withheld Jan 22 13:46:04 crc kubenswrapper[4769]: [+]process-running ok Jan 22 13:46:04 crc kubenswrapper[4769]: healthz check failed Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.990193 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pb7qw" podUID="5c5cf556-ec03-4f29-94ed-13a58f54275c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.999732 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mn7q6\" (UniqueName: \"kubernetes.io/projected/9fbf5655-9685-4e15-a6af-41793097be11-kube-api-access-mn7q6\") pod \"redhat-marketplace-j2rz6\" (UID: \"9fbf5655-9685-4e15-a6af-41793097be11\") " pod="openshift-marketplace/redhat-marketplace-j2rz6" Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.019981 4769 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.068853 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:46:05 crc kubenswrapper[4769]: E0122 13:46:05.069276 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 13:46:05.569257986 +0000 UTC m=+144.980367915 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jhd8d" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.087404 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j2rz6" Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.170244 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:46:05 crc kubenswrapper[4769]: E0122 13:46:05.170769 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:46:05.6707503 +0000 UTC m=+145.081860229 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.238195 4769 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-22T13:46:05.020019271Z","Handler":null,"Name":""} Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.244816 4769 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.244848 4769 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.272291 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.275713 4769 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.277115 4769 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.283857 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j2rz6"] Jan 22 13:46:05 crc kubenswrapper[4769]: W0122 13:46:05.285696 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9fbf5655_9685_4e15_a6af_41793097be11.slice/crio-a09f3ed86d9fde6e4e25dc5687d5358cea66879bd11fddb52ce0cdd1a1c76559 WatchSource:0}: Error finding container a09f3ed86d9fde6e4e25dc5687d5358cea66879bd11fddb52ce0cdd1a1c76559: Status 404 returned error can't find the container with id a09f3ed86d9fde6e4e25dc5687d5358cea66879bd11fddb52ce0cdd1a1c76559 Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.306296 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xdxvs" event={"ID":"6e9c7f00-95b3-4453-8d82-df8b88a2bc8a","Type":"ContainerStarted","Data":"3f92cbd07839fbaa3d584c387dc2cafe2802444ba5d5904cc7a5d5ed77b73e8c"} Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.306334 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xdxvs" event={"ID":"6e9c7f00-95b3-4453-8d82-df8b88a2bc8a","Type":"ContainerStarted","Data":"5b5ad09fdc86a17007c33355037be7b7436f7222bd66d3af98cfc8a19f27a448"} Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.308333 4769 generic.go:334] "Generic (PLEG): container finished" podID="bc744951-0370-42be-a1c0-e639d8d8cd31" containerID="acd4331bf5a97dd63bc534d1279a9dc1a57106f0b79215b9c6214a3510910a34" exitCode=0 Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.308380 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2ks9m" event={"ID":"bc744951-0370-42be-a1c0-e639d8d8cd31","Type":"ContainerDied","Data":"acd4331bf5a97dd63bc534d1279a9dc1a57106f0b79215b9c6214a3510910a34"} Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.308395 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2ks9m" event={"ID":"bc744951-0370-42be-a1c0-e639d8d8cd31","Type":"ContainerStarted","Data":"9d4a213a14f5a21b9ecd231875d6aa22cbbfb7d75a58db27a2f98d97feb1dafb"} Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.310322 4769 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.315061 4769 generic.go:334] "Generic (PLEG): container finished" podID="7d9e80ce-c46e-4a99-814e-0d9b1b65623f" containerID="f32dd634065691a644d2461a7fae6aa8b2a0092557591202f1589d051602d962" exitCode=0 Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.315129 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lxbp4" 
event={"ID":"7d9e80ce-c46e-4a99-814e-0d9b1b65623f","Type":"ContainerDied","Data":"f32dd634065691a644d2461a7fae6aa8b2a0092557591202f1589d051602d962"} Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.315153 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lxbp4" event={"ID":"7d9e80ce-c46e-4a99-814e-0d9b1b65623f","Type":"ContainerStarted","Data":"87dc0ac39542afbc65ec3e6d0bdb93cd67aa154947a205f465b24220379804bc"} Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.338596 4769 generic.go:334] "Generic (PLEG): container finished" podID="4f403243-0359-478d-a3a6-29a8f0bc29e2" containerID="4c144c7583b39f46ce262d7733d67ac1e5ba5328388a3f5612a2fae5ceb8a4dd" exitCode=0 Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.338681 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7wh4n" event={"ID":"4f403243-0359-478d-a3a6-29a8f0bc29e2","Type":"ContainerDied","Data":"4c144c7583b39f46ce262d7733d67ac1e5ba5328388a3f5612a2fae5ceb8a4dd"} Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.338710 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7wh4n" event={"ID":"4f403243-0359-478d-a3a6-29a8f0bc29e2","Type":"ContainerStarted","Data":"b542c5dbcb707bb656b636afb6aa1bcc3a67f0090bf88281e297bd475aa9bd3f"} Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.341299 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.343499 4769 generic.go:334] "Generic (PLEG): container finished" podID="3b69c283-f109-4f09-9a01-8d21d3764892" containerID="046d05b3f47f3e1cd122e05caaffbaade2a750f09bb666394477d6007a1313e9" exitCode=0 Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.343669 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5rnmz" event={"ID":"3b69c283-f109-4f09-9a01-8d21d3764892","Type":"ContainerDied","Data":"046d05b3f47f3e1cd122e05caaffbaade2a750f09bb666394477d6007a1313e9"} Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.343774 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5rnmz" event={"ID":"3b69c283-f109-4f09-9a01-8d21d3764892","Type":"ContainerStarted","Data":"95901b43f1b0b192d242724acdf435d55c1a459bc7ffc435091c0491b7b2a77a"} Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.353083 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v8jk5"] Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.367366 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-rkk84" event={"ID":"bf805bae-0da1-4a8b-a8c8-6c99cf8ce515","Type":"ContainerStarted","Data":"48f5c380a7ea4ee98b4e34be622cd179a9b205dd6bf31d14cd339a36d7938822"} Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.367472 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-rkk84" Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.374814 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.381879 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-jjt2k" event={"ID":"15723c66-27d3-4cea-9962-e75bbe7bb967","Type":"ContainerStarted","Data":"3cad4256a432d1a1f02170ee5ecd3bf344bb54c0f2b371cbc98acd9bbe0e5542"} Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.403943 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.463359 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.474529 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-rkk84" podStartSLOduration=10.474512347 podStartE2EDuration="10.474512347s" podCreationTimestamp="2026-01-22 13:45:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:46:05.465357806 +0000 UTC m=+144.876467745" watchObservedRunningTime="2026-01-22 13:46:05.474512347 +0000 UTC m=+144.885622276" Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.474810 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-jjt2k" podStartSLOduration=126.474805576 podStartE2EDuration="2m6.474805576s" podCreationTimestamp="2026-01-22 13:43:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:46:05.450224969 +0000 UTC m=+144.861334918" watchObservedRunningTime="2026-01-22 13:46:05.474805576 +0000 UTC m=+144.885915505" Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.722440 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-jhd8d"] Jan 22 13:46:05 crc kubenswrapper[4769]: W0122 13:46:05.729092 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75dcccce_425a_46ab_bfeb_dc5a0ee835d4.slice/crio-65a07796fc29ddbb6109cfb9449db8675835bbaed67ec222e3b441daddcd1e4a WatchSource:0}: Error finding container 65a07796fc29ddbb6109cfb9449db8675835bbaed67ec222e3b441daddcd1e4a: Status 404 returned error can't find the container with id 65a07796fc29ddbb6109cfb9449db8675835bbaed67ec222e3b441daddcd1e4a Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.752464 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-k2w22"] Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.753474 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k2w22" Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.756676 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.784382 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkpck\" (UniqueName: \"kubernetes.io/projected/652c2c5a-f885-4bf3-a4f8-73a4717f6a3a-kube-api-access-qkpck\") pod \"redhat-operators-k2w22\" (UID: \"652c2c5a-f885-4bf3-a4f8-73a4717f6a3a\") " pod="openshift-marketplace/redhat-operators-k2w22" Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.784513 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/652c2c5a-f885-4bf3-a4f8-73a4717f6a3a-utilities\") pod \"redhat-operators-k2w22\" (UID: \"652c2c5a-f885-4bf3-a4f8-73a4717f6a3a\") " pod="openshift-marketplace/redhat-operators-k2w22" Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.784562 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/652c2c5a-f885-4bf3-a4f8-73a4717f6a3a-catalog-content\") pod \"redhat-operators-k2w22\" (UID: \"652c2c5a-f885-4bf3-a4f8-73a4717f6a3a\") " pod="openshift-marketplace/redhat-operators-k2w22" Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.829782 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k2w22"] Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.885452 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/652c2c5a-f885-4bf3-a4f8-73a4717f6a3a-utilities\") pod \"redhat-operators-k2w22\" (UID: \"652c2c5a-f885-4bf3-a4f8-73a4717f6a3a\") " pod="openshift-marketplace/redhat-operators-k2w22" Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.885527 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/652c2c5a-f885-4bf3-a4f8-73a4717f6a3a-catalog-content\") pod \"redhat-operators-k2w22\" (UID: \"652c2c5a-f885-4bf3-a4f8-73a4717f6a3a\") " pod="openshift-marketplace/redhat-operators-k2w22" Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.885564 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkpck\" (UniqueName: \"kubernetes.io/projected/652c2c5a-f885-4bf3-a4f8-73a4717f6a3a-kube-api-access-qkpck\") pod \"redhat-operators-k2w22\" (UID: \"652c2c5a-f885-4bf3-a4f8-73a4717f6a3a\") " pod="openshift-marketplace/redhat-operators-k2w22" Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.886933 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/652c2c5a-f885-4bf3-a4f8-73a4717f6a3a-utilities\") pod \"redhat-operators-k2w22\" (UID: \"652c2c5a-f885-4bf3-a4f8-73a4717f6a3a\") " pod="openshift-marketplace/redhat-operators-k2w22" Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.887031 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/652c2c5a-f885-4bf3-a4f8-73a4717f6a3a-catalog-content\") pod \"redhat-operators-k2w22\" (UID: \"652c2c5a-f885-4bf3-a4f8-73a4717f6a3a\") " 
pod="openshift-marketplace/redhat-operators-k2w22" Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.914775 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkpck\" (UniqueName: \"kubernetes.io/projected/652c2c5a-f885-4bf3-a4f8-73a4717f6a3a-kube-api-access-qkpck\") pod \"redhat-operators-k2w22\" (UID: \"652c2c5a-f885-4bf3-a4f8-73a4717f6a3a\") " pod="openshift-marketplace/redhat-operators-k2w22" Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.988341 4769 patch_prober.go:28] interesting pod/router-default-5444994796-pb7qw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 13:46:05 crc kubenswrapper[4769]: [-]has-synced failed: reason withheld Jan 22 13:46:05 crc kubenswrapper[4769]: [+]process-running ok Jan 22 13:46:05 crc kubenswrapper[4769]: healthz check failed Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.988408 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pb7qw" podUID="5c5cf556-ec03-4f29-94ed-13a58f54275c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 13:46:06 crc kubenswrapper[4769]: I0122 13:46:06.138424 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k2w22" Jan 22 13:46:06 crc kubenswrapper[4769]: I0122 13:46:06.151627 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9x475"] Jan 22 13:46:06 crc kubenswrapper[4769]: I0122 13:46:06.152602 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9x475" Jan 22 13:46:06 crc kubenswrapper[4769]: I0122 13:46:06.166061 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9x475"] Jan 22 13:46:06 crc kubenswrapper[4769]: I0122 13:46:06.190364 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/143027dc-ac6a-442f-bf57-3dcd7efd0427-catalog-content\") pod \"redhat-operators-9x475\" (UID: \"143027dc-ac6a-442f-bf57-3dcd7efd0427\") " pod="openshift-marketplace/redhat-operators-9x475" Jan 22 13:46:06 crc kubenswrapper[4769]: I0122 13:46:06.190485 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/143027dc-ac6a-442f-bf57-3dcd7efd0427-utilities\") pod \"redhat-operators-9x475\" (UID: \"143027dc-ac6a-442f-bf57-3dcd7efd0427\") " pod="openshift-marketplace/redhat-operators-9x475" Jan 22 13:46:06 crc kubenswrapper[4769]: I0122 13:46:06.190567 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjqjf\" (UniqueName: \"kubernetes.io/projected/143027dc-ac6a-442f-bf57-3dcd7efd0427-kube-api-access-hjqjf\") pod \"redhat-operators-9x475\" (UID: \"143027dc-ac6a-442f-bf57-3dcd7efd0427\") " pod="openshift-marketplace/redhat-operators-9x475" Jan 22 13:46:06 crc kubenswrapper[4769]: I0122 13:46:06.291902 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/143027dc-ac6a-442f-bf57-3dcd7efd0427-utilities\") pod \"redhat-operators-9x475\" (UID: \"143027dc-ac6a-442f-bf57-3dcd7efd0427\") " 
pod="openshift-marketplace/redhat-operators-9x475" Jan 22 13:46:06 crc kubenswrapper[4769]: I0122 13:46:06.292016 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjqjf\" (UniqueName: \"kubernetes.io/projected/143027dc-ac6a-442f-bf57-3dcd7efd0427-kube-api-access-hjqjf\") pod \"redhat-operators-9x475\" (UID: \"143027dc-ac6a-442f-bf57-3dcd7efd0427\") " pod="openshift-marketplace/redhat-operators-9x475" Jan 22 13:46:06 crc kubenswrapper[4769]: I0122 13:46:06.292126 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/143027dc-ac6a-442f-bf57-3dcd7efd0427-catalog-content\") pod \"redhat-operators-9x475\" (UID: \"143027dc-ac6a-442f-bf57-3dcd7efd0427\") " pod="openshift-marketplace/redhat-operators-9x475" Jan 22 13:46:06 crc kubenswrapper[4769]: I0122 13:46:06.292421 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/143027dc-ac6a-442f-bf57-3dcd7efd0427-utilities\") pod \"redhat-operators-9x475\" (UID: \"143027dc-ac6a-442f-bf57-3dcd7efd0427\") " pod="openshift-marketplace/redhat-operators-9x475" Jan 22 13:46:06 crc kubenswrapper[4769]: I0122 13:46:06.292461 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/143027dc-ac6a-442f-bf57-3dcd7efd0427-catalog-content\") pod \"redhat-operators-9x475\" (UID: \"143027dc-ac6a-442f-bf57-3dcd7efd0427\") " pod="openshift-marketplace/redhat-operators-9x475" Jan 22 13:46:06 crc kubenswrapper[4769]: I0122 13:46:06.314403 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjqjf\" (UniqueName: \"kubernetes.io/projected/143027dc-ac6a-442f-bf57-3dcd7efd0427-kube-api-access-hjqjf\") pod \"redhat-operators-9x475\" (UID: \"143027dc-ac6a-442f-bf57-3dcd7efd0427\") " pod="openshift-marketplace/redhat-operators-9x475" Jan 22 13:46:06 crc kubenswrapper[4769]: I0122 13:46:06.441748 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" event={"ID":"75dcccce-425a-46ab-bfeb-dc5a0ee835d4","Type":"ContainerStarted","Data":"bc3d673f0c6c961ce4f8660b81b0fde6d0b971f745bc5a43865df409316c3484"} Jan 22 13:46:06 crc kubenswrapper[4769]: I0122 13:46:06.442127 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" event={"ID":"75dcccce-425a-46ab-bfeb-dc5a0ee835d4","Type":"ContainerStarted","Data":"65a07796fc29ddbb6109cfb9449db8675835bbaed67ec222e3b441daddcd1e4a"} Jan 22 13:46:06 crc kubenswrapper[4769]: I0122 13:46:06.442164 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:46:06 crc kubenswrapper[4769]: I0122 13:46:06.468774 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" podStartSLOduration=126.468754927 podStartE2EDuration="2m6.468754927s" podCreationTimestamp="2026-01-22 13:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:46:06.466585037 +0000 UTC m=+145.877694966" watchObservedRunningTime="2026-01-22 13:46:06.468754927 +0000 UTC m=+145.879864866" Jan 22 13:46:06 crc kubenswrapper[4769]: I0122 13:46:06.470495 4769 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:46:06 crc kubenswrapper[4769]: I0122 13:46:06.474259 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k2w22"] Jan 22 13:46:06 crc kubenswrapper[4769]: I0122 13:46:06.496700 4769 generic.go:334] "Generic (PLEG): container finished" podID="98dd81ac-1a92-4d5a-9e09-bcc49ac33a85" containerID="bd94526c2545e7d42d2caa419fef7b4eaae03cecfaac7722e27dfd4ed49fa03a" exitCode=0 Jan 22 13:46:06 crc kubenswrapper[4769]: I0122 13:46:06.496800 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v8jk5" event={"ID":"98dd81ac-1a92-4d5a-9e09-bcc49ac33a85","Type":"ContainerDied","Data":"bd94526c2545e7d42d2caa419fef7b4eaae03cecfaac7722e27dfd4ed49fa03a"} Jan 22 13:46:06 crc kubenswrapper[4769]: I0122 13:46:06.496848 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v8jk5" event={"ID":"98dd81ac-1a92-4d5a-9e09-bcc49ac33a85","Type":"ContainerStarted","Data":"6e66e2dbf8bc8a080c55b13a7260516fe1212a4c0154bcf230d5878c8ebeeeed"} Jan 22 13:46:06 crc kubenswrapper[4769]: I0122 13:46:06.529089 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9x475" Jan 22 13:46:06 crc kubenswrapper[4769]: I0122 13:46:06.559029 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xdxvs" event={"ID":"6e9c7f00-95b3-4453-8d82-df8b88a2bc8a","Type":"ContainerStarted","Data":"bc5c05abf51e8270472b3dd332fa8bf294f31fb227e2e85b20e544ed47f8d921"} Jan 22 13:46:06 crc kubenswrapper[4769]: I0122 13:46:06.594078 4769 generic.go:334] "Generic (PLEG): container finished" podID="9fbf5655-9685-4e15-a6af-41793097be11" containerID="3502879dadc38b5cd99def96e405968a047479756eeea61ee2071af582a36fdd" exitCode=0 Jan 22 13:46:06 crc kubenswrapper[4769]: I0122 13:46:06.594298 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j2rz6" event={"ID":"9fbf5655-9685-4e15-a6af-41793097be11","Type":"ContainerDied","Data":"3502879dadc38b5cd99def96e405968a047479756eeea61ee2071af582a36fdd"} Jan 22 13:46:06 crc kubenswrapper[4769]: I0122 13:46:06.594360 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j2rz6" event={"ID":"9fbf5655-9685-4e15-a6af-41793097be11","Type":"ContainerStarted","Data":"a09f3ed86d9fde6e4e25dc5687d5358cea66879bd11fddb52ce0cdd1a1c76559"} Jan 22 13:46:06 crc kubenswrapper[4769]: I0122 13:46:06.643477 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-xdxvs" podStartSLOduration=11.643459734 podStartE2EDuration="11.643459734s" podCreationTimestamp="2026-01-22 13:45:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:46:06.643020122 +0000 UTC m=+146.054130061" watchObservedRunningTime="2026-01-22 13:46:06.643459734 +0000 UTC m=+146.054569663" Jan 22 13:46:06 crc kubenswrapper[4769]: I0122 13:46:06.910198 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 22 13:46:06 crc kubenswrapper[4769]: I0122 13:46:06.945413 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/redhat-operators-9x475"] Jan 22 13:46:06 crc kubenswrapper[4769]: W0122 13:46:06.962004 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod143027dc_ac6a_442f_bf57_3dcd7efd0427.slice/crio-eb0f0ad4dc9a1519cccefda94331b40c9be757f72e950d3d8010309da7e5d54b WatchSource:0}: Error finding container eb0f0ad4dc9a1519cccefda94331b40c9be757f72e950d3d8010309da7e5d54b: Status 404 returned error can't find the container with id eb0f0ad4dc9a1519cccefda94331b40c9be757f72e950d3d8010309da7e5d54b Jan 22 13:46:06 crc kubenswrapper[4769]: I0122 13:46:06.988556 4769 patch_prober.go:28] interesting pod/router-default-5444994796-pb7qw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 13:46:06 crc kubenswrapper[4769]: [-]has-synced failed: reason withheld Jan 22 13:46:06 crc kubenswrapper[4769]: [+]process-running ok Jan 22 13:46:06 crc kubenswrapper[4769]: healthz check failed Jan 22 13:46:06 crc kubenswrapper[4769]: I0122 13:46:06.988608 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pb7qw" podUID="5c5cf556-ec03-4f29-94ed-13a58f54275c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 13:46:07 crc kubenswrapper[4769]: I0122 13:46:07.317990 4769 patch_prober.go:28] interesting pod/downloads-7954f5f757-mgft7 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 22 13:46:07 crc kubenswrapper[4769]: I0122 13:46:07.318044 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-mgft7" podUID="92eb7fb7-d1b8-45ad-b8ff-8411d04eb048" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 22 13:46:07 crc kubenswrapper[4769]: I0122 13:46:07.318882 4769 patch_prober.go:28] interesting pod/downloads-7954f5f757-mgft7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 22 13:46:07 crc kubenswrapper[4769]: I0122 13:46:07.319129 4769 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mgft7" podUID="92eb7fb7-d1b8-45ad-b8ff-8411d04eb048" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 22 13:46:07 crc kubenswrapper[4769]: I0122 13:46:07.490779 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-jjt2k" Jan 22 13:46:07 crc kubenswrapper[4769]: I0122 13:46:07.491230 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-jjt2k" Jan 22 13:46:07 crc kubenswrapper[4769]: I0122 13:46:07.497854 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-jjt2k" Jan 22 13:46:07 crc kubenswrapper[4769]: I0122 13:46:07.498219 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5985" Jan 22 13:46:07 crc kubenswrapper[4769]: I0122 13:46:07.498258 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5985" Jan 22 13:46:07 crc kubenswrapper[4769]: I0122 13:46:07.505331 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5985" Jan 22 13:46:07 crc kubenswrapper[4769]: I0122 13:46:07.511575 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-nwrtw" Jan 22 13:46:07 crc kubenswrapper[4769]: I0122 13:46:07.511704 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-nwrtw" Jan 22 13:46:07 crc kubenswrapper[4769]: I0122 13:46:07.513659 4769 patch_prober.go:28] interesting pod/console-f9d7485db-nwrtw container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Jan 22 13:46:07 crc kubenswrapper[4769]: I0122 13:46:07.513725 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-nwrtw" podUID="9fa4c168-21ea-4f79-a600-7f3c8f656bd0" containerName="console" probeResult="failure" output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 22 13:46:07 crc kubenswrapper[4769]: I0122 13:46:07.643175 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484825-hgsdh" event={"ID":"3ef7a187-ce98-488c-a9b0-e16449e2882f","Type":"ContainerDied","Data":"e652943776f78a5fd95ced60a7e853ebc62ea8a256a4dea93d8512bf63d1796f"} Jan 22 13:46:07 crc kubenswrapper[4769]: I0122 13:46:07.643248 4769 generic.go:334] "Generic (PLEG): container finished" podID="3ef7a187-ce98-488c-a9b0-e16449e2882f" containerID="e652943776f78a5fd95ced60a7e853ebc62ea8a256a4dea93d8512bf63d1796f" exitCode=0 Jan 22 13:46:07 crc kubenswrapper[4769]: I0122 13:46:07.654066 4769 generic.go:334] "Generic (PLEG): container finished" podID="652c2c5a-f885-4bf3-a4f8-73a4717f6a3a" containerID="5773768bc9993d556325ab6b5012f24996ced11ddc55ad2bd215bb338220f42b" exitCode=0 Jan 22 13:46:07 crc kubenswrapper[4769]: I0122 13:46:07.654121 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2w22" event={"ID":"652c2c5a-f885-4bf3-a4f8-73a4717f6a3a","Type":"ContainerDied","Data":"5773768bc9993d556325ab6b5012f24996ced11ddc55ad2bd215bb338220f42b"} Jan 22 13:46:07 crc kubenswrapper[4769]: I0122 13:46:07.654145 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2w22" event={"ID":"652c2c5a-f885-4bf3-a4f8-73a4717f6a3a","Type":"ContainerStarted","Data":"ab73ea8d8d9a566fef3480c2969fb2296deb50f4ddfdc8ecead203c9dda4e719"} Jan 22 13:46:07 crc kubenswrapper[4769]: I0122 13:46:07.657915 4769 generic.go:334] "Generic (PLEG): container finished" podID="143027dc-ac6a-442f-bf57-3dcd7efd0427" containerID="5a649e12e124f4a64a4f1afd91e39d4e717943b4a392b3b9c65213bb1e563adb" exitCode=0 Jan 22 13:46:07 crc kubenswrapper[4769]: I0122 13:46:07.658037 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9x475" 
event={"ID":"143027dc-ac6a-442f-bf57-3dcd7efd0427","Type":"ContainerDied","Data":"5a649e12e124f4a64a4f1afd91e39d4e717943b4a392b3b9c65213bb1e563adb"} Jan 22 13:46:07 crc kubenswrapper[4769]: I0122 13:46:07.658055 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9x475" event={"ID":"143027dc-ac6a-442f-bf57-3dcd7efd0427","Type":"ContainerStarted","Data":"eb0f0ad4dc9a1519cccefda94331b40c9be757f72e950d3d8010309da7e5d54b"} Jan 22 13:46:07 crc kubenswrapper[4769]: I0122 13:46:07.664756 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-jjt2k" Jan 22 13:46:07 crc kubenswrapper[4769]: I0122 13:46:07.665210 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5985" Jan 22 13:46:07 crc kubenswrapper[4769]: I0122 13:46:07.935368 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:46:07 crc kubenswrapper[4769]: I0122 13:46:07.935454 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:46:07 crc kubenswrapper[4769]: I0122 13:46:07.948618 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:46:07 crc kubenswrapper[4769]: I0122 13:46:07.956567 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:46:07 crc kubenswrapper[4769]: I0122 13:46:07.986446 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-pb7qw" Jan 22 13:46:07 crc kubenswrapper[4769]: I0122 13:46:07.990371 4769 patch_prober.go:28] interesting pod/router-default-5444994796-pb7qw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 13:46:07 crc kubenswrapper[4769]: [-]has-synced failed: reason withheld Jan 22 13:46:07 crc kubenswrapper[4769]: [+]process-running ok Jan 22 13:46:07 crc kubenswrapper[4769]: healthz check failed Jan 22 13:46:07 crc kubenswrapper[4769]: I0122 13:46:07.990430 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pb7qw" podUID="5c5cf556-ec03-4f29-94ed-13a58f54275c" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 13:46:08 crc kubenswrapper[4769]: I0122 13:46:08.037243 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:46:08 crc kubenswrapper[4769]: I0122 13:46:08.037335 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:46:08 crc kubenswrapper[4769]: I0122 13:46:08.041531 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:46:08 crc kubenswrapper[4769]: I0122 13:46:08.053066 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:46:08 crc kubenswrapper[4769]: I0122 13:46:08.107038 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:46:08 crc kubenswrapper[4769]: I0122 13:46:08.134093 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:46:08 crc kubenswrapper[4769]: I0122 13:46:08.148636 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:46:08 crc kubenswrapper[4769]: I0122 13:46:08.183915 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 22 13:46:08 crc kubenswrapper[4769]: I0122 13:46:08.184694 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 13:46:08 crc kubenswrapper[4769]: I0122 13:46:08.186893 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 22 13:46:08 crc kubenswrapper[4769]: I0122 13:46:08.190907 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 22 13:46:08 crc kubenswrapper[4769]: I0122 13:46:08.186718 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 22 13:46:08 crc kubenswrapper[4769]: I0122 13:46:08.353377 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed99cfde-1902-4453-9add-80bcda64e51f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"ed99cfde-1902-4453-9add-80bcda64e51f\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 13:46:08 crc kubenswrapper[4769]: I0122 13:46:08.353726 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ed99cfde-1902-4453-9add-80bcda64e51f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"ed99cfde-1902-4453-9add-80bcda64e51f\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 13:46:08 crc kubenswrapper[4769]: I0122 13:46:08.455434 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed99cfde-1902-4453-9add-80bcda64e51f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"ed99cfde-1902-4453-9add-80bcda64e51f\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 13:46:08 crc kubenswrapper[4769]: I0122 13:46:08.455494 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ed99cfde-1902-4453-9add-80bcda64e51f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"ed99cfde-1902-4453-9add-80bcda64e51f\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 13:46:08 crc kubenswrapper[4769]: I0122 13:46:08.455574 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ed99cfde-1902-4453-9add-80bcda64e51f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"ed99cfde-1902-4453-9add-80bcda64e51f\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 13:46:08 crc kubenswrapper[4769]: I0122 13:46:08.478414 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed99cfde-1902-4453-9add-80bcda64e51f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"ed99cfde-1902-4453-9add-80bcda64e51f\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 13:46:08 crc kubenswrapper[4769]: I0122 13:46:08.525490 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 13:46:08 crc kubenswrapper[4769]: I0122 13:46:08.725095 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"c99d92123863bc1d707dc9890e2e74fa177cd611a96a52755527862f9ed84368"} Jan 22 13:46:08 crc kubenswrapper[4769]: I0122 13:46:08.998912 4769 patch_prober.go:28] interesting pod/router-default-5444994796-pb7qw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 13:46:08 crc kubenswrapper[4769]: [-]has-synced failed: reason withheld Jan 22 13:46:08 crc kubenswrapper[4769]: [+]process-running ok Jan 22 13:46:08 crc kubenswrapper[4769]: healthz check failed Jan 22 13:46:09 crc kubenswrapper[4769]: I0122 13:46:08.999341 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pb7qw" podUID="5c5cf556-ec03-4f29-94ed-13a58f54275c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 13:46:09 crc kubenswrapper[4769]: I0122 13:46:09.172465 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484825-hgsdh" Jan 22 13:46:09 crc kubenswrapper[4769]: I0122 13:46:09.174991 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 22 13:46:09 crc kubenswrapper[4769]: W0122 13:46:09.197195 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-poded99cfde_1902_4453_9add_80bcda64e51f.slice/crio-24d852b9d3e7cd202857e267965466d3a4c751edcfae7482682b1cecb449ab20 WatchSource:0}: Error finding container 24d852b9d3e7cd202857e267965466d3a4c751edcfae7482682b1cecb449ab20: Status 404 returned error can't find the container with id 24d852b9d3e7cd202857e267965466d3a4c751edcfae7482682b1cecb449ab20 Jan 22 13:46:09 crc kubenswrapper[4769]: I0122 13:46:09.279532 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3ef7a187-ce98-488c-a9b0-e16449e2882f-config-volume\") pod \"3ef7a187-ce98-488c-a9b0-e16449e2882f\" (UID: \"3ef7a187-ce98-488c-a9b0-e16449e2882f\") " Jan 22 13:46:09 crc kubenswrapper[4769]: I0122 13:46:09.279618 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8874\" (UniqueName: \"kubernetes.io/projected/3ef7a187-ce98-488c-a9b0-e16449e2882f-kube-api-access-n8874\") pod \"3ef7a187-ce98-488c-a9b0-e16449e2882f\" (UID: \"3ef7a187-ce98-488c-a9b0-e16449e2882f\") " Jan 22 13:46:09 crc kubenswrapper[4769]: I0122 13:46:09.279638 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3ef7a187-ce98-488c-a9b0-e16449e2882f-secret-volume\") pod \"3ef7a187-ce98-488c-a9b0-e16449e2882f\" (UID: \"3ef7a187-ce98-488c-a9b0-e16449e2882f\") " Jan 22 13:46:09 crc kubenswrapper[4769]: I0122 13:46:09.280448 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ef7a187-ce98-488c-a9b0-e16449e2882f-config-volume" (OuterVolumeSpecName: "config-volume") pod "3ef7a187-ce98-488c-a9b0-e16449e2882f" (UID: 
"3ef7a187-ce98-488c-a9b0-e16449e2882f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:46:09 crc kubenswrapper[4769]: I0122 13:46:09.293027 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ef7a187-ce98-488c-a9b0-e16449e2882f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3ef7a187-ce98-488c-a9b0-e16449e2882f" (UID: "3ef7a187-ce98-488c-a9b0-e16449e2882f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:46:09 crc kubenswrapper[4769]: I0122 13:46:09.294265 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ef7a187-ce98-488c-a9b0-e16449e2882f-kube-api-access-n8874" (OuterVolumeSpecName: "kube-api-access-n8874") pod "3ef7a187-ce98-488c-a9b0-e16449e2882f" (UID: "3ef7a187-ce98-488c-a9b0-e16449e2882f"). InnerVolumeSpecName "kube-api-access-n8874". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:46:09 crc kubenswrapper[4769]: I0122 13:46:09.381384 4769 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3ef7a187-ce98-488c-a9b0-e16449e2882f-config-volume\") on node \"crc\" DevicePath \"\"" Jan 22 13:46:09 crc kubenswrapper[4769]: I0122 13:46:09.381421 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8874\" (UniqueName: \"kubernetes.io/projected/3ef7a187-ce98-488c-a9b0-e16449e2882f-kube-api-access-n8874\") on node \"crc\" DevicePath \"\"" Jan 22 13:46:09 crc kubenswrapper[4769]: I0122 13:46:09.381471 4769 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3ef7a187-ce98-488c-a9b0-e16449e2882f-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 22 13:46:09 crc kubenswrapper[4769]: I0122 13:46:09.748486 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"ed99cfde-1902-4453-9add-80bcda64e51f","Type":"ContainerStarted","Data":"24d852b9d3e7cd202857e267965466d3a4c751edcfae7482682b1cecb449ab20"} Jan 22 13:46:09 crc kubenswrapper[4769]: I0122 13:46:09.751065 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484825-hgsdh" event={"ID":"3ef7a187-ce98-488c-a9b0-e16449e2882f","Type":"ContainerDied","Data":"b5f0b3f3f7b7a0b35bdff04091a4f43dc2a4d7a638db51c8e64ac5ca77fff8bf"} Jan 22 13:46:09 crc kubenswrapper[4769]: I0122 13:46:09.751097 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5f0b3f3f7b7a0b35bdff04091a4f43dc2a4d7a638db51c8e64ac5ca77fff8bf" Jan 22 13:46:09 crc kubenswrapper[4769]: I0122 13:46:09.751153 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484825-hgsdh" Jan 22 13:46:09 crc kubenswrapper[4769]: I0122 13:46:09.756308 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"8283411590af6ac01c407eb5eac96c45560649f3eed1ec2d108aacafba468b5c"} Jan 22 13:46:09 crc kubenswrapper[4769]: I0122 13:46:09.756412 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:46:09 crc kubenswrapper[4769]: I0122 13:46:09.766109 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"b2337fd96f64c22418ef9b022ca0c9a1e82691be7d47643651c83f901b1b9110"} Jan 22 13:46:09 crc kubenswrapper[4769]: I0122 13:46:09.775985 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"75eaabaf74ef52dc0ddf7f9dae2d842ae826de4142370d68a79a182670b120fc"} Jan 22 13:46:09 crc kubenswrapper[4769]: I0122 13:46:09.988031 4769 patch_prober.go:28] interesting pod/router-default-5444994796-pb7qw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 13:46:09 crc kubenswrapper[4769]: [-]has-synced failed: reason withheld Jan 22 13:46:09 crc kubenswrapper[4769]: [+]process-running ok Jan 22 13:46:09 crc kubenswrapper[4769]: healthz check failed Jan 22 13:46:09 crc kubenswrapper[4769]: I0122 13:46:09.988089 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pb7qw" podUID="5c5cf556-ec03-4f29-94ed-13a58f54275c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 13:46:10 crc kubenswrapper[4769]: I0122 13:46:10.481723 4769 patch_prober.go:28] interesting pod/machine-config-daemon-hwhw7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 13:46:10 crc kubenswrapper[4769]: I0122 13:46:10.481842 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 13:46:10 crc kubenswrapper[4769]: I0122 13:46:10.793215 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"bc32aa32cf748cd584d5cfeb225a4682c619a4b9f7a5ba38151e4aad68ec7d04"} Jan 22 13:46:10 crc kubenswrapper[4769]: I0122 13:46:10.808402 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"d3cb45eeee556f0f1d0899e75c07fef57250967dace39b43969090ad0ff41dff"} Jan 22 13:46:10 crc kubenswrapper[4769]: I0122 13:46:10.817563 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"ed99cfde-1902-4453-9add-80bcda64e51f","Type":"ContainerStarted","Data":"00f3666902563fa3aae0f23c8fc0eed6fb06623043f3bbcf88522aa9cb27e647"} Jan 22 13:46:10 crc kubenswrapper[4769]: I0122 13:46:10.879642 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.879624261 podStartE2EDuration="2.879624261s" podCreationTimestamp="2026-01-22 13:46:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:46:10.876897846 +0000 UTC m=+150.288007775" watchObservedRunningTime="2026-01-22 13:46:10.879624261 +0000 UTC m=+150.290734190" Jan 22 13:46:10 crc kubenswrapper[4769]: I0122 13:46:10.987828 4769 patch_prober.go:28] interesting pod/router-default-5444994796-pb7qw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 13:46:10 crc kubenswrapper[4769]: [-]has-synced failed: reason withheld Jan 22 13:46:10 crc kubenswrapper[4769]: [+]process-running ok Jan 22 13:46:10 crc kubenswrapper[4769]: healthz check failed Jan 22 13:46:10 crc kubenswrapper[4769]: I0122 13:46:10.987881 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pb7qw" podUID="5c5cf556-ec03-4f29-94ed-13a58f54275c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 13:46:11 crc kubenswrapper[4769]: I0122 13:46:11.392694 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 22 13:46:11 crc kubenswrapper[4769]: E0122 13:46:11.394075 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ef7a187-ce98-488c-a9b0-e16449e2882f" containerName="collect-profiles" Jan 22 13:46:11 crc kubenswrapper[4769]: I0122 13:46:11.394092 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ef7a187-ce98-488c-a9b0-e16449e2882f" containerName="collect-profiles" Jan 22 13:46:11 crc kubenswrapper[4769]: I0122 13:46:11.394260 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ef7a187-ce98-488c-a9b0-e16449e2882f" containerName="collect-profiles" Jan 22 13:46:11 crc kubenswrapper[4769]: I0122 13:46:11.395034 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 13:46:11 crc kubenswrapper[4769]: I0122 13:46:11.396686 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 22 13:46:11 crc kubenswrapper[4769]: I0122 13:46:11.397975 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 22 13:46:11 crc kubenswrapper[4769]: I0122 13:46:11.398370 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 22 13:46:11 crc kubenswrapper[4769]: I0122 13:46:11.454439 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/36001332-1cc9-44dc-8137-c117c2101ecd-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"36001332-1cc9-44dc-8137-c117c2101ecd\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 13:46:11 crc kubenswrapper[4769]: I0122 13:46:11.454522 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/36001332-1cc9-44dc-8137-c117c2101ecd-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"36001332-1cc9-44dc-8137-c117c2101ecd\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 13:46:11 crc kubenswrapper[4769]: I0122 13:46:11.556671 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/36001332-1cc9-44dc-8137-c117c2101ecd-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"36001332-1cc9-44dc-8137-c117c2101ecd\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 13:46:11 crc kubenswrapper[4769]: I0122 13:46:11.556782 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/36001332-1cc9-44dc-8137-c117c2101ecd-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"36001332-1cc9-44dc-8137-c117c2101ecd\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 13:46:11 crc kubenswrapper[4769]: I0122 13:46:11.556904 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/36001332-1cc9-44dc-8137-c117c2101ecd-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"36001332-1cc9-44dc-8137-c117c2101ecd\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 13:46:11 crc kubenswrapper[4769]: I0122 13:46:11.609400 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/36001332-1cc9-44dc-8137-c117c2101ecd-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"36001332-1cc9-44dc-8137-c117c2101ecd\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 13:46:11 crc kubenswrapper[4769]: I0122 13:46:11.731844 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 13:46:11 crc kubenswrapper[4769]: I0122 13:46:11.826024 4769 generic.go:334] "Generic (PLEG): container finished" podID="ed99cfde-1902-4453-9add-80bcda64e51f" containerID="00f3666902563fa3aae0f23c8fc0eed6fb06623043f3bbcf88522aa9cb27e647" exitCode=0 Jan 22 13:46:11 crc kubenswrapper[4769]: I0122 13:46:11.827184 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"ed99cfde-1902-4453-9add-80bcda64e51f","Type":"ContainerDied","Data":"00f3666902563fa3aae0f23c8fc0eed6fb06623043f3bbcf88522aa9cb27e647"} Jan 22 13:46:11 crc kubenswrapper[4769]: I0122 13:46:11.987870 4769 patch_prober.go:28] interesting pod/router-default-5444994796-pb7qw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 13:46:11 crc kubenswrapper[4769]: [-]has-synced failed: reason withheld Jan 22 13:46:11 crc kubenswrapper[4769]: [+]process-running ok Jan 22 13:46:11 crc kubenswrapper[4769]: healthz check failed Jan 22 13:46:11 crc kubenswrapper[4769]: I0122 13:46:11.988256 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pb7qw" podUID="5c5cf556-ec03-4f29-94ed-13a58f54275c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 13:46:12 crc kubenswrapper[4769]: I0122 13:46:12.022535 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 22 13:46:12 crc kubenswrapper[4769]: W0122 13:46:12.033387 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod36001332_1cc9_44dc_8137_c117c2101ecd.slice/crio-999093893dbc0da449822486168213db3d159de59ac829e76e33c04a73e8847b WatchSource:0}: Error finding container 999093893dbc0da449822486168213db3d159de59ac829e76e33c04a73e8847b: Status 404 returned error can't find the container with id 999093893dbc0da449822486168213db3d159de59ac829e76e33c04a73e8847b Jan 22 13:46:12 crc kubenswrapper[4769]: I0122 13:46:12.835368 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"36001332-1cc9-44dc-8137-c117c2101ecd","Type":"ContainerStarted","Data":"999093893dbc0da449822486168213db3d159de59ac829e76e33c04a73e8847b"} Jan 22 13:46:13 crc kubenswrapper[4769]: I0122 13:46:12.999978 4769 patch_prober.go:28] interesting pod/router-default-5444994796-pb7qw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 13:46:13 crc kubenswrapper[4769]: [-]has-synced failed: reason withheld Jan 22 13:46:13 crc kubenswrapper[4769]: [+]process-running ok Jan 22 13:46:13 crc kubenswrapper[4769]: healthz check failed Jan 22 13:46:13 crc kubenswrapper[4769]: I0122 13:46:13.000056 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pb7qw" podUID="5c5cf556-ec03-4f29-94ed-13a58f54275c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 13:46:13 crc kubenswrapper[4769]: I0122 13:46:13.501037 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-rkk84" Jan 22 13:46:13 crc kubenswrapper[4769]: I0122 
Jan 22 13:46:13 crc kubenswrapper[4769]: I0122 13:46:13.859263 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.8592394519999997 podStartE2EDuration="2.859239452s" podCreationTimestamp="2026-01-22 13:46:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:46:13.85624282 +0000 UTC m=+153.267352749" watchObservedRunningTime="2026-01-22 13:46:13.859239452 +0000 UTC m=+153.270349381"
Jan 22 13:46:14 crc kubenswrapper[4769]: I0122 13:46:14.003432 4769 patch_prober.go:28] interesting pod/router-default-5444994796-pb7qw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 22 13:46:14 crc kubenswrapper[4769]: [-]has-synced failed: reason withheld
Jan 22 13:46:14 crc kubenswrapper[4769]: [+]process-running ok
Jan 22 13:46:14 crc kubenswrapper[4769]: healthz check failed
Jan 22 13:46:14 crc kubenswrapper[4769]: I0122 13:46:14.003504 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pb7qw" podUID="5c5cf556-ec03-4f29-94ed-13a58f54275c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 22 13:46:14 crc kubenswrapper[4769]: I0122 13:46:14.872851 4769 generic.go:334] "Generic (PLEG): container finished" podID="36001332-1cc9-44dc-8137-c117c2101ecd" containerID="d40eb4c56433a3c051eab9532b06a720b749ca810d2cdaf3cacba78fc2ce3050" exitCode=0
Jan 22 13:46:14 crc kubenswrapper[4769]: I0122 13:46:14.872890 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"36001332-1cc9-44dc-8137-c117c2101ecd","Type":"ContainerDied","Data":"d40eb4c56433a3c051eab9532b06a720b749ca810d2cdaf3cacba78fc2ce3050"}
Jan 22 13:46:14 crc kubenswrapper[4769]: I0122 13:46:14.993399 4769 patch_prober.go:28] interesting pod/router-default-5444994796-pb7qw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 22 13:46:14 crc kubenswrapper[4769]: [-]has-synced failed: reason withheld
Jan 22 13:46:14 crc kubenswrapper[4769]: [+]process-running ok
Jan 22 13:46:14 crc kubenswrapper[4769]: healthz check failed
Jan 22 13:46:14 crc kubenswrapper[4769]: I0122 13:46:14.993775 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pb7qw" podUID="5c5cf556-ec03-4f29-94ed-13a58f54275c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 22 13:46:16 crc kubenswrapper[4769]: I0122 13:46:16.003947 4769 patch_prober.go:28] interesting pod/router-default-5444994796-pb7qw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 22 13:46:16 crc kubenswrapper[4769]: [-]has-synced failed: reason withheld
Jan 22 13:46:16 crc kubenswrapper[4769]: [+]process-running ok
Jan 22 13:46:16 crc kubenswrapper[4769]: healthz check failed
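Note: the probe output above ("[-]backend-http failed: reason withheld ... healthz check failed") is the usual aggregated-healthz format: one line per sub-check, "[+]" for pass, "[-]" for fail, and the endpoint returns HTTP 500 whenever any check fails. A small Go sketch reproducing the format; the check names are taken from the log, the rest is illustrative:

    package main

    import "fmt"

    func main() {
    	checks := []struct {
    		name string
    		ok   bool
    	}{
    		{"backend-http", false},
    		{"has-synced", false},
    		{"process-running", true},
    	}
    	failed := false
    	for _, c := range checks {
    		mark, suffix := "[+]", "ok"
    		if !c.ok {
    			mark, suffix, failed = "[-]", "failed: reason withheld", true
    		}
    		fmt.Printf("%s%s %s\n", mark, c.name, suffix)
    	}
    	if failed {
    		fmt.Println("healthz check failed") // served with status code 500
    	}
    }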
Jan 22 13:46:16 crc kubenswrapper[4769]: I0122 13:46:16.004023 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pb7qw" podUID="5c5cf556-ec03-4f29-94ed-13a58f54275c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 22 13:46:16 crc kubenswrapper[4769]: I0122 13:46:16.995592 4769 patch_prober.go:28] interesting pod/router-default-5444994796-pb7qw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 22 13:46:16 crc kubenswrapper[4769]: [-]has-synced failed: reason withheld
Jan 22 13:46:16 crc kubenswrapper[4769]: [+]process-running ok
Jan 22 13:46:16 crc kubenswrapper[4769]: healthz check failed
Jan 22 13:46:16 crc kubenswrapper[4769]: I0122 13:46:16.995663 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pb7qw" podUID="5c5cf556-ec03-4f29-94ed-13a58f54275c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 22 13:46:17 crc kubenswrapper[4769]: I0122 13:46:17.322211 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-mgft7"
Jan 22 13:46:17 crc kubenswrapper[4769]: I0122 13:46:17.511509 4769 patch_prober.go:28] interesting pod/console-f9d7485db-nwrtw container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body=
Jan 22 13:46:17 crc kubenswrapper[4769]: I0122 13:46:17.511565 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-nwrtw" podUID="9fa4c168-21ea-4f79-a600-7f3c8f656bd0" containerName="console" probeResult="failure" output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused"
Jan 22 13:46:17 crc kubenswrapper[4769]: I0122 13:46:17.987568 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-pb7qw"
Jan 22 13:46:17 crc kubenswrapper[4769]: I0122 13:46:17.991054 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-pb7qw"
Jan 22 13:46:22 crc kubenswrapper[4769]: I0122 13:46:22.731577 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9764ff0b-ae92-470b-af85-7c8bb41642ba-metrics-certs\") pod \"network-metrics-daemon-cfh49\" (UID: \"9764ff0b-ae92-470b-af85-7c8bb41642ba\") " pod="openshift-multus/network-metrics-daemon-cfh49"
Jan 22 13:46:22 crc kubenswrapper[4769]: I0122 13:46:22.739414 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9764ff0b-ae92-470b-af85-7c8bb41642ba-metrics-certs\") pod \"network-metrics-daemon-cfh49\" (UID: \"9764ff0b-ae92-470b-af85-7c8bb41642ba\") " pod="openshift-multus/network-metrics-daemon-cfh49"
Jan 22 13:46:22 crc kubenswrapper[4769]: I0122 13:46:22.904736 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49"
Jan 22 13:46:24 crc kubenswrapper[4769]: I0122 13:46:24.322300 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 13:46:24 crc kubenswrapper[4769]: I0122 13:46:24.330282 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 13:46:24 crc kubenswrapper[4769]: I0122 13:46:24.456005 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/36001332-1cc9-44dc-8137-c117c2101ecd-kubelet-dir\") pod \"36001332-1cc9-44dc-8137-c117c2101ecd\" (UID: \"36001332-1cc9-44dc-8137-c117c2101ecd\") " Jan 22 13:46:24 crc kubenswrapper[4769]: I0122 13:46:24.456063 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/36001332-1cc9-44dc-8137-c117c2101ecd-kube-api-access\") pod \"36001332-1cc9-44dc-8137-c117c2101ecd\" (UID: \"36001332-1cc9-44dc-8137-c117c2101ecd\") " Jan 22 13:46:24 crc kubenswrapper[4769]: I0122 13:46:24.456078 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36001332-1cc9-44dc-8137-c117c2101ecd-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "36001332-1cc9-44dc-8137-c117c2101ecd" (UID: "36001332-1cc9-44dc-8137-c117c2101ecd"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 13:46:24 crc kubenswrapper[4769]: I0122 13:46:24.456130 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ed99cfde-1902-4453-9add-80bcda64e51f-kubelet-dir\") pod \"ed99cfde-1902-4453-9add-80bcda64e51f\" (UID: \"ed99cfde-1902-4453-9add-80bcda64e51f\") " Jan 22 13:46:24 crc kubenswrapper[4769]: I0122 13:46:24.456208 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed99cfde-1902-4453-9add-80bcda64e51f-kube-api-access\") pod \"ed99cfde-1902-4453-9add-80bcda64e51f\" (UID: \"ed99cfde-1902-4453-9add-80bcda64e51f\") " Jan 22 13:46:24 crc kubenswrapper[4769]: I0122 13:46:24.456234 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed99cfde-1902-4453-9add-80bcda64e51f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ed99cfde-1902-4453-9add-80bcda64e51f" (UID: "ed99cfde-1902-4453-9add-80bcda64e51f"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 13:46:24 crc kubenswrapper[4769]: I0122 13:46:24.456400 4769 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/36001332-1cc9-44dc-8137-c117c2101ecd-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 22 13:46:24 crc kubenswrapper[4769]: I0122 13:46:24.456410 4769 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ed99cfde-1902-4453-9add-80bcda64e51f-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 22 13:46:24 crc kubenswrapper[4769]: I0122 13:46:24.463128 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36001332-1cc9-44dc-8137-c117c2101ecd-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "36001332-1cc9-44dc-8137-c117c2101ecd" (UID: "36001332-1cc9-44dc-8137-c117c2101ecd"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:46:24 crc kubenswrapper[4769]: I0122 13:46:24.463176 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed99cfde-1902-4453-9add-80bcda64e51f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ed99cfde-1902-4453-9add-80bcda64e51f" (UID: "ed99cfde-1902-4453-9add-80bcda64e51f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:46:24 crc kubenswrapper[4769]: I0122 13:46:24.557719 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed99cfde-1902-4453-9add-80bcda64e51f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 13:46:24 crc kubenswrapper[4769]: I0122 13:46:24.557775 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/36001332-1cc9-44dc-8137-c117c2101ecd-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 13:46:24 crc kubenswrapper[4769]: I0122 13:46:24.934055 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 13:46:24 crc kubenswrapper[4769]: I0122 13:46:24.934067 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"ed99cfde-1902-4453-9add-80bcda64e51f","Type":"ContainerDied","Data":"24d852b9d3e7cd202857e267965466d3a4c751edcfae7482682b1cecb449ab20"} Jan 22 13:46:24 crc kubenswrapper[4769]: I0122 13:46:24.934710 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24d852b9d3e7cd202857e267965466d3a4c751edcfae7482682b1cecb449ab20" Jan 22 13:46:24 crc kubenswrapper[4769]: I0122 13:46:24.936566 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"36001332-1cc9-44dc-8137-c117c2101ecd","Type":"ContainerDied","Data":"999093893dbc0da449822486168213db3d159de59ac829e76e33c04a73e8847b"} Jan 22 13:46:24 crc kubenswrapper[4769]: I0122 13:46:24.936607 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="999093893dbc0da449822486168213db3d159de59ac829e76e33c04a73e8847b" Jan 22 13:46:24 crc kubenswrapper[4769]: I0122 13:46:24.936676 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 13:46:25 crc kubenswrapper[4769]: I0122 13:46:25.471481 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:46:27 crc kubenswrapper[4769]: I0122 13:46:27.589972 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-nwrtw" Jan 22 13:46:27 crc kubenswrapper[4769]: I0122 13:46:27.597141 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-nwrtw" Jan 22 13:46:37 crc kubenswrapper[4769]: I0122 13:46:37.807482 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jr9vm" Jan 22 13:46:38 crc kubenswrapper[4769]: I0122 13:46:38.111498 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:46:39 crc kubenswrapper[4769]: E0122 13:46:39.277872 4769 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 22 13:46:39 crc kubenswrapper[4769]: E0122 13:46:39.278443 4769 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x86gf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-lxbp4_openshift-marketplace(7d9e80ce-c46e-4a99-814e-0d9b1b65623f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 22 13:46:39 crc kubenswrapper[4769]: E0122 13:46:39.280206 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest 
Jan 22 13:46:40 crc kubenswrapper[4769]: E0122 13:46:40.350446 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-lxbp4" podUID="7d9e80ce-c46e-4a99-814e-0d9b1b65623f"
Jan 22 13:46:40 crc kubenswrapper[4769]: E0122 13:46:40.413263 4769 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Jan 22 13:46:40 crc kubenswrapper[4769]: E0122 13:46:40.413440 4769 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xmkrp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-2ks9m_openshift-marketplace(bc744951-0370-42be-a1c0-e639d8d8cd31): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 22 13:46:40 crc kubenswrapper[4769]: E0122 13:46:40.414774 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-2ks9m" podUID="bc744951-0370-42be-a1c0-e639d8d8cd31"
Jan 22 13:46:40 crc kubenswrapper[4769]: I0122 13:46:40.482151 4769 patch_prober.go:28] interesting pod/machine-config-daemon-hwhw7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 13:46:40 crc kubenswrapper[4769]: I0122 13:46:40.482223 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 13:46:43 crc kubenswrapper[4769]: E0122 13:46:43.307965 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-2ks9m" podUID="bc744951-0370-42be-a1c0-e639d8d8cd31"
Jan 22 13:46:43 crc kubenswrapper[4769]: E0122 13:46:43.381671 4769 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Jan 22 13:46:43 crc kubenswrapper[4769]: E0122 13:46:43.381862 4769 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hjqjf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-9x475_openshift-marketplace(143027dc-ac6a-442f-bf57-3dcd7efd0427): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 22 13:46:43 crc kubenswrapper[4769]: E0122 13:46:43.383562 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-9x475" podUID="143027dc-ac6a-442f-bf57-3dcd7efd0427"
Jan 22 13:46:43 crc kubenswrapper[4769]: E0122 13:46:43.385587 4769 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Jan 22 13:46:43 crc kubenswrapper[4769]: E0122 13:46:43.385697 4769 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qkpck,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-k2w22_openshift-marketplace(652c2c5a-f885-4bf3-a4f8-73a4717f6a3a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 22 13:46:43 crc kubenswrapper[4769]: E0122 13:46:43.387266 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-k2w22" podUID="652c2c5a-f885-4bf3-a4f8-73a4717f6a3a"
Jan 22 13:46:44 crc kubenswrapper[4769]: E0122 13:46:44.434859 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-k2w22" podUID="652c2c5a-f885-4bf3-a4f8-73a4717f6a3a"
Jan 22 13:46:44 crc kubenswrapper[4769]: E0122 13:46:44.434906 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-9x475" podUID="143027dc-ac6a-442f-bf57-3dcd7efd0427"
Jan 22 13:46:44 crc kubenswrapper[4769]: E0122 13:46:44.498092 4769 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Jan 22 13:46:44 crc kubenswrapper[4769]: E0122 13:46:44.498609 4769 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mn7q6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-j2rz6_openshift-marketplace(9fbf5655-9685-4e15-a6af-41793097be11): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
E0122 13:46:44.498609 4769 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mn7q6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-j2rz6_openshift-marketplace(9fbf5655-9685-4e15-a6af-41793097be11): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 22 13:46:44 crc kubenswrapper[4769]: E0122 13:46:44.499831 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-j2rz6" podUID="9fbf5655-9685-4e15-a6af-41793097be11" Jan 22 13:46:44 crc kubenswrapper[4769]: E0122 13:46:44.543461 4769 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 22 13:46:44 crc kubenswrapper[4769]: E0122 13:46:44.543611 4769 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xx5tc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-7wh4n_openshift-marketplace(4f403243-0359-478d-a3a6-29a8f0bc29e2): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 22 13:46:44 crc kubenswrapper[4769]: E0122 13:46:44.544784 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-7wh4n" podUID="4f403243-0359-478d-a3a6-29a8f0bc29e2" Jan 22 13:46:44 crc kubenswrapper[4769]: E0122 13:46:44.561579 4769 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 22 13:46:44 crc kubenswrapper[4769]: E0122 13:46:44.562613 4769 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dm4mw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-v8jk5_openshift-marketplace(98dd81ac-1a92-4d5a-9e09-bcc49ac33a85): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 22 13:46:44 crc kubenswrapper[4769]: E0122 13:46:44.563905 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-v8jk5" podUID="98dd81ac-1a92-4d5a-9e09-bcc49ac33a85" Jan 22 13:46:44 crc kubenswrapper[4769]: E0122 13:46:44.572432 4769 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 22 13:46:44 crc kubenswrapper[4769]: E0122 13:46:44.572739 4769 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gj54v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-5rnmz_openshift-marketplace(3b69c283-f109-4f09-9a01-8d21d3764892): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 22 13:46:44 crc kubenswrapper[4769]: E0122 13:46:44.577027 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-5rnmz" podUID="3b69c283-f109-4f09-9a01-8d21d3764892" Jan 22 13:46:44 crc kubenswrapper[4769]: I0122 13:46:44.825144 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-cfh49"] Jan 22 13:46:45 crc kubenswrapper[4769]: I0122 13:46:45.047657 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-cfh49" event={"ID":"9764ff0b-ae92-470b-af85-7c8bb41642ba","Type":"ContainerStarted","Data":"871759f0c2cb1bf835a48fe1c3c45df35d209a15e67a90ded611c851eb461ac2"} Jan 22 13:46:45 crc kubenswrapper[4769]: I0122 13:46:45.048075 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-cfh49" event={"ID":"9764ff0b-ae92-470b-af85-7c8bb41642ba","Type":"ContainerStarted","Data":"fc128d161cc56dbd9945fc65e631262910146990d95c0102a3359c6af7774ef5"} Jan 22 13:46:45 crc kubenswrapper[4769]: E0122 13:46:45.049185 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-7wh4n" podUID="4f403243-0359-478d-a3a6-29a8f0bc29e2" Jan 22 13:46:45 crc kubenswrapper[4769]: E0122 13:46:45.056241 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-j2rz6" 
podUID="9fbf5655-9685-4e15-a6af-41793097be11" Jan 22 13:46:45 crc kubenswrapper[4769]: E0122 13:46:45.056335 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-5rnmz" podUID="3b69c283-f109-4f09-9a01-8d21d3764892" Jan 22 13:46:45 crc kubenswrapper[4769]: E0122 13:46:45.056342 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-v8jk5" podUID="98dd81ac-1a92-4d5a-9e09-bcc49ac33a85" Jan 22 13:46:45 crc kubenswrapper[4769]: I0122 13:46:45.989055 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 22 13:46:45 crc kubenswrapper[4769]: E0122 13:46:45.989326 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36001332-1cc9-44dc-8137-c117c2101ecd" containerName="pruner" Jan 22 13:46:45 crc kubenswrapper[4769]: I0122 13:46:45.989342 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="36001332-1cc9-44dc-8137-c117c2101ecd" containerName="pruner" Jan 22 13:46:45 crc kubenswrapper[4769]: E0122 13:46:45.989364 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed99cfde-1902-4453-9add-80bcda64e51f" containerName="pruner" Jan 22 13:46:45 crc kubenswrapper[4769]: I0122 13:46:45.989374 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed99cfde-1902-4453-9add-80bcda64e51f" containerName="pruner" Jan 22 13:46:45 crc kubenswrapper[4769]: I0122 13:46:45.989519 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="36001332-1cc9-44dc-8137-c117c2101ecd" containerName="pruner" Jan 22 13:46:45 crc kubenswrapper[4769]: I0122 13:46:45.990266 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed99cfde-1902-4453-9add-80bcda64e51f" containerName="pruner" Jan 22 13:46:45 crc kubenswrapper[4769]: I0122 13:46:45.990856 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 13:46:45 crc kubenswrapper[4769]: I0122 13:46:45.994361 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 22 13:46:45 crc kubenswrapper[4769]: I0122 13:46:45.997299 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 22 13:46:46 crc kubenswrapper[4769]: I0122 13:46:46.006116 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 22 13:46:46 crc kubenswrapper[4769]: I0122 13:46:46.054178 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-cfh49" event={"ID":"9764ff0b-ae92-470b-af85-7c8bb41642ba","Type":"ContainerStarted","Data":"c9c7117195a6c56a6c7c00d6deb5e9326aa93080a7e4bb2226cdd4bcfe164637"} Jan 22 13:46:46 crc kubenswrapper[4769]: I0122 13:46:46.058234 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2144f5ad-561d-4f3f-bc49-dae55cb0773f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2144f5ad-561d-4f3f-bc49-dae55cb0773f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 13:46:46 crc kubenswrapper[4769]: I0122 13:46:46.058289 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2144f5ad-561d-4f3f-bc49-dae55cb0773f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2144f5ad-561d-4f3f-bc49-dae55cb0773f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 13:46:46 crc kubenswrapper[4769]: I0122 13:46:46.070291 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-cfh49" podStartSLOduration=166.070270777 podStartE2EDuration="2m46.070270777s" podCreationTimestamp="2026-01-22 13:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:46:46.066696809 +0000 UTC m=+185.477806758" watchObservedRunningTime="2026-01-22 13:46:46.070270777 +0000 UTC m=+185.481380706" Jan 22 13:46:46 crc kubenswrapper[4769]: I0122 13:46:46.159805 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2144f5ad-561d-4f3f-bc49-dae55cb0773f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2144f5ad-561d-4f3f-bc49-dae55cb0773f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 13:46:46 crc kubenswrapper[4769]: I0122 13:46:46.159918 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2144f5ad-561d-4f3f-bc49-dae55cb0773f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2144f5ad-561d-4f3f-bc49-dae55cb0773f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 13:46:46 crc kubenswrapper[4769]: I0122 13:46:46.160008 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2144f5ad-561d-4f3f-bc49-dae55cb0773f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2144f5ad-561d-4f3f-bc49-dae55cb0773f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 13:46:46 crc kubenswrapper[4769]: I0122 13:46:46.184537 4769 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2144f5ad-561d-4f3f-bc49-dae55cb0773f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2144f5ad-561d-4f3f-bc49-dae55cb0773f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 13:46:46 crc kubenswrapper[4769]: I0122 13:46:46.313939 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 13:46:46 crc kubenswrapper[4769]: I0122 13:46:46.717180 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 22 13:46:47 crc kubenswrapper[4769]: I0122 13:46:47.061616 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"2144f5ad-561d-4f3f-bc49-dae55cb0773f","Type":"ContainerStarted","Data":"54a1e96488be8112c2c484ff5689f16167bf622b7f0a90f3d28a31e125f9d56a"} Jan 22 13:46:48 crc kubenswrapper[4769]: I0122 13:46:48.071688 4769 generic.go:334] "Generic (PLEG): container finished" podID="2144f5ad-561d-4f3f-bc49-dae55cb0773f" containerID="989e3ac043272fed98dde5e78a5ad367a612ccbb3669b94d0f2d4e845f33992f" exitCode=0 Jan 22 13:46:48 crc kubenswrapper[4769]: I0122 13:46:48.071774 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"2144f5ad-561d-4f3f-bc49-dae55cb0773f","Type":"ContainerDied","Data":"989e3ac043272fed98dde5e78a5ad367a612ccbb3669b94d0f2d4e845f33992f"} Jan 22 13:46:49 crc kubenswrapper[4769]: I0122 13:46:49.297214 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 13:46:49 crc kubenswrapper[4769]: I0122 13:46:49.310186 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2144f5ad-561d-4f3f-bc49-dae55cb0773f-kubelet-dir\") pod \"2144f5ad-561d-4f3f-bc49-dae55cb0773f\" (UID: \"2144f5ad-561d-4f3f-bc49-dae55cb0773f\") " Jan 22 13:46:49 crc kubenswrapper[4769]: I0122 13:46:49.310271 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2144f5ad-561d-4f3f-bc49-dae55cb0773f-kube-api-access\") pod \"2144f5ad-561d-4f3f-bc49-dae55cb0773f\" (UID: \"2144f5ad-561d-4f3f-bc49-dae55cb0773f\") " Jan 22 13:46:49 crc kubenswrapper[4769]: I0122 13:46:49.310499 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2144f5ad-561d-4f3f-bc49-dae55cb0773f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2144f5ad-561d-4f3f-bc49-dae55cb0773f" (UID: "2144f5ad-561d-4f3f-bc49-dae55cb0773f"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 13:46:49 crc kubenswrapper[4769]: I0122 13:46:49.321818 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2144f5ad-561d-4f3f-bc49-dae55cb0773f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2144f5ad-561d-4f3f-bc49-dae55cb0773f" (UID: "2144f5ad-561d-4f3f-bc49-dae55cb0773f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:46:49 crc kubenswrapper[4769]: I0122 13:46:49.412174 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2144f5ad-561d-4f3f-bc49-dae55cb0773f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 13:46:49 crc kubenswrapper[4769]: I0122 13:46:49.412419 4769 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2144f5ad-561d-4f3f-bc49-dae55cb0773f-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 22 13:46:50 crc kubenswrapper[4769]: I0122 13:46:50.091158 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"2144f5ad-561d-4f3f-bc49-dae55cb0773f","Type":"ContainerDied","Data":"54a1e96488be8112c2c484ff5689f16167bf622b7f0a90f3d28a31e125f9d56a"} Jan 22 13:46:50 crc kubenswrapper[4769]: I0122 13:46:50.091452 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54a1e96488be8112c2c484ff5689f16167bf622b7f0a90f3d28a31e125f9d56a" Jan 22 13:46:50 crc kubenswrapper[4769]: I0122 13:46:50.091670 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 13:46:52 crc kubenswrapper[4769]: I0122 13:46:52.588842 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 22 13:46:52 crc kubenswrapper[4769]: E0122 13:46:52.589353 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2144f5ad-561d-4f3f-bc49-dae55cb0773f" containerName="pruner" Jan 22 13:46:52 crc kubenswrapper[4769]: I0122 13:46:52.589386 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="2144f5ad-561d-4f3f-bc49-dae55cb0773f" containerName="pruner" Jan 22 13:46:52 crc kubenswrapper[4769]: I0122 13:46:52.589627 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="2144f5ad-561d-4f3f-bc49-dae55cb0773f" containerName="pruner" Jan 22 13:46:52 crc kubenswrapper[4769]: I0122 13:46:52.590417 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 22 13:46:52 crc kubenswrapper[4769]: I0122 13:46:52.596763 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 22 13:46:52 crc kubenswrapper[4769]: I0122 13:46:52.598403 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 22 13:46:52 crc kubenswrapper[4769]: I0122 13:46:52.605977 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 22 13:46:52 crc kubenswrapper[4769]: I0122 13:46:52.654366 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/98422033-e252-4416-9d6c-9a782f84a615-var-lock\") pod \"installer-9-crc\" (UID: \"98422033-e252-4416-9d6c-9a782f84a615\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 13:46:52 crc kubenswrapper[4769]: I0122 13:46:52.654439 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/98422033-e252-4416-9d6c-9a782f84a615-kube-api-access\") pod \"installer-9-crc\" (UID: \"98422033-e252-4416-9d6c-9a782f84a615\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 13:46:52 crc kubenswrapper[4769]: I0122 13:46:52.654484 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/98422033-e252-4416-9d6c-9a782f84a615-kubelet-dir\") pod \"installer-9-crc\" (UID: \"98422033-e252-4416-9d6c-9a782f84a615\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 13:46:52 crc kubenswrapper[4769]: I0122 13:46:52.755204 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/98422033-e252-4416-9d6c-9a782f84a615-kube-api-access\") pod \"installer-9-crc\" (UID: \"98422033-e252-4416-9d6c-9a782f84a615\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 13:46:52 crc kubenswrapper[4769]: I0122 13:46:52.755466 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/98422033-e252-4416-9d6c-9a782f84a615-kubelet-dir\") pod \"installer-9-crc\" (UID: \"98422033-e252-4416-9d6c-9a782f84a615\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 13:46:52 crc kubenswrapper[4769]: I0122 13:46:52.755580 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/98422033-e252-4416-9d6c-9a782f84a615-kubelet-dir\") pod \"installer-9-crc\" (UID: \"98422033-e252-4416-9d6c-9a782f84a615\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 13:46:52 crc kubenswrapper[4769]: I0122 13:46:52.755588 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/98422033-e252-4416-9d6c-9a782f84a615-var-lock\") pod \"installer-9-crc\" (UID: \"98422033-e252-4416-9d6c-9a782f84a615\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 13:46:52 crc kubenswrapper[4769]: I0122 13:46:52.755714 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/98422033-e252-4416-9d6c-9a782f84a615-var-lock\") pod \"installer-9-crc\" (UID: 
\"98422033-e252-4416-9d6c-9a782f84a615\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 13:46:52 crc kubenswrapper[4769]: I0122 13:46:52.773760 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/98422033-e252-4416-9d6c-9a782f84a615-kube-api-access\") pod \"installer-9-crc\" (UID: \"98422033-e252-4416-9d6c-9a782f84a615\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 13:46:52 crc kubenswrapper[4769]: I0122 13:46:52.907172 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 22 13:46:53 crc kubenswrapper[4769]: I0122 13:46:53.079550 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 22 13:46:53 crc kubenswrapper[4769]: I0122 13:46:53.105730 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"98422033-e252-4416-9d6c-9a782f84a615","Type":"ContainerStarted","Data":"cef04179ac91b5e7825693fb666c552ce048659165cf412a395f896a85539fbc"} Jan 22 13:46:55 crc kubenswrapper[4769]: I0122 13:46:55.117205 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lxbp4" event={"ID":"7d9e80ce-c46e-4a99-814e-0d9b1b65623f","Type":"ContainerStarted","Data":"0b4e548d90afb445385c5445511aa7202d16841342834b94c99673ef067eba6b"} Jan 22 13:46:55 crc kubenswrapper[4769]: I0122 13:46:55.118758 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"98422033-e252-4416-9d6c-9a782f84a615","Type":"ContainerStarted","Data":"4c41b665319b212a65ed0ded3d69aee9bf5218eae07c0bc2b667f9ac261cd977"} Jan 22 13:46:55 crc kubenswrapper[4769]: I0122 13:46:55.136382 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=3.136359031 podStartE2EDuration="3.136359031s" podCreationTimestamp="2026-01-22 13:46:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:46:55.133313426 +0000 UTC m=+194.544423365" watchObservedRunningTime="2026-01-22 13:46:55.136359031 +0000 UTC m=+194.547468950" Jan 22 13:46:56 crc kubenswrapper[4769]: I0122 13:46:56.125435 4769 generic.go:334] "Generic (PLEG): container finished" podID="7d9e80ce-c46e-4a99-814e-0d9b1b65623f" containerID="0b4e548d90afb445385c5445511aa7202d16841342834b94c99673ef067eba6b" exitCode=0 Jan 22 13:46:56 crc kubenswrapper[4769]: I0122 13:46:56.125512 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lxbp4" event={"ID":"7d9e80ce-c46e-4a99-814e-0d9b1b65623f","Type":"ContainerDied","Data":"0b4e548d90afb445385c5445511aa7202d16841342834b94c99673ef067eba6b"} Jan 22 13:46:57 crc kubenswrapper[4769]: I0122 13:46:57.135259 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lxbp4" event={"ID":"7d9e80ce-c46e-4a99-814e-0d9b1b65623f","Type":"ContainerStarted","Data":"40c54e06453c65c374b60fc978fde1151fc81cdd83905f6d1eab45b8f04a0be1"} Jan 22 13:46:57 crc kubenswrapper[4769]: I0122 13:46:57.138329 4769 generic.go:334] "Generic (PLEG): container finished" podID="9fbf5655-9685-4e15-a6af-41793097be11" containerID="2093f881d46af13d52d1fd20f110b59c6f048ae5d26012e9bdb3824ba5bc9f97" exitCode=0 Jan 22 13:46:57 crc kubenswrapper[4769]: I0122 
13:46:57.138380 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j2rz6" event={"ID":"9fbf5655-9685-4e15-a6af-41793097be11","Type":"ContainerDied","Data":"2093f881d46af13d52d1fd20f110b59c6f048ae5d26012e9bdb3824ba5bc9f97"} Jan 22 13:46:57 crc kubenswrapper[4769]: I0122 13:46:57.157301 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lxbp4" podStartSLOduration=3.95828838 podStartE2EDuration="55.157281101s" podCreationTimestamp="2026-01-22 13:46:02 +0000 UTC" firstStartedPulling="2026-01-22 13:46:05.317447085 +0000 UTC m=+144.728557014" lastFinishedPulling="2026-01-22 13:46:56.516439806 +0000 UTC m=+195.927549735" observedRunningTime="2026-01-22 13:46:57.154075753 +0000 UTC m=+196.565185682" watchObservedRunningTime="2026-01-22 13:46:57.157281101 +0000 UTC m=+196.568391030" Jan 22 13:46:58 crc kubenswrapper[4769]: I0122 13:46:58.145660 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j2rz6" event={"ID":"9fbf5655-9685-4e15-a6af-41793097be11","Type":"ContainerStarted","Data":"2b7cfe6672ef75a7bbf8ae1ba009321f1510b8bb071422e60f3a5319d2a3d6df"} Jan 22 13:46:58 crc kubenswrapper[4769]: I0122 13:46:58.169421 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-j2rz6" podStartSLOduration=3.2548638690000002 podStartE2EDuration="54.169401742s" podCreationTimestamp="2026-01-22 13:46:04 +0000 UTC" firstStartedPulling="2026-01-22 13:46:06.607914416 +0000 UTC m=+146.019024345" lastFinishedPulling="2026-01-22 13:46:57.522452289 +0000 UTC m=+196.933562218" observedRunningTime="2026-01-22 13:46:58.165481784 +0000 UTC m=+197.576591713" watchObservedRunningTime="2026-01-22 13:46:58.169401742 +0000 UTC m=+197.580511671" Jan 22 13:46:59 crc kubenswrapper[4769]: I0122 13:46:59.152356 4769 generic.go:334] "Generic (PLEG): container finished" podID="3b69c283-f109-4f09-9a01-8d21d3764892" containerID="e400121af3cd67eb8bf5be7255f64ed7758734a95d64ae486777a9d10ec8aeb7" exitCode=0 Jan 22 13:46:59 crc kubenswrapper[4769]: I0122 13:46:59.152443 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5rnmz" event={"ID":"3b69c283-f109-4f09-9a01-8d21d3764892","Type":"ContainerDied","Data":"e400121af3cd67eb8bf5be7255f64ed7758734a95d64ae486777a9d10ec8aeb7"} Jan 22 13:46:59 crc kubenswrapper[4769]: I0122 13:46:59.154432 4769 generic.go:334] "Generic (PLEG): container finished" podID="652c2c5a-f885-4bf3-a4f8-73a4717f6a3a" containerID="fa803241b9a5ea5819645ac5f5279180cdfd0cd95f936430c68e37095716dc0b" exitCode=0 Jan 22 13:46:59 crc kubenswrapper[4769]: I0122 13:46:59.154470 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2w22" event={"ID":"652c2c5a-f885-4bf3-a4f8-73a4717f6a3a","Type":"ContainerDied","Data":"fa803241b9a5ea5819645ac5f5279180cdfd0cd95f936430c68e37095716dc0b"} Jan 22 13:47:00 crc kubenswrapper[4769]: I0122 13:47:00.164039 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5rnmz" event={"ID":"3b69c283-f109-4f09-9a01-8d21d3764892","Type":"ContainerStarted","Data":"1c8fc3cb530e77764cab2c943062502a2e038d4d2dc51fdf4d33f28c4197f9f8"} Jan 22 13:47:00 crc kubenswrapper[4769]: I0122 13:47:00.166067 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2ks9m" 
event={"ID":"bc744951-0370-42be-a1c0-e639d8d8cd31","Type":"ContainerStarted","Data":"7a208431e8933c9e4e61cbd123e3fa30817703e607bc55c6193139bbbbb024a0"} Jan 22 13:47:00 crc kubenswrapper[4769]: I0122 13:47:00.168039 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2w22" event={"ID":"652c2c5a-f885-4bf3-a4f8-73a4717f6a3a","Type":"ContainerStarted","Data":"d825a6e9070be650270f2a51743038dd26cc2e4afe06ccff5aa90cefb1c29a2b"} Jan 22 13:47:00 crc kubenswrapper[4769]: I0122 13:47:00.187586 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5rnmz" podStartSLOduration=3.963075941 podStartE2EDuration="58.187566295s" podCreationTimestamp="2026-01-22 13:46:02 +0000 UTC" firstStartedPulling="2026-01-22 13:46:05.34666331 +0000 UTC m=+144.757773239" lastFinishedPulling="2026-01-22 13:46:59.571153664 +0000 UTC m=+198.982263593" observedRunningTime="2026-01-22 13:47:00.184420789 +0000 UTC m=+199.595530718" watchObservedRunningTime="2026-01-22 13:47:00.187566295 +0000 UTC m=+199.598676224" Jan 22 13:47:00 crc kubenswrapper[4769]: I0122 13:47:00.204400 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-k2w22" podStartSLOduration=3.33248322 podStartE2EDuration="55.204380838s" podCreationTimestamp="2026-01-22 13:46:05 +0000 UTC" firstStartedPulling="2026-01-22 13:46:07.665129217 +0000 UTC m=+147.076239146" lastFinishedPulling="2026-01-22 13:46:59.537026835 +0000 UTC m=+198.948136764" observedRunningTime="2026-01-22 13:47:00.201024216 +0000 UTC m=+199.612134155" watchObservedRunningTime="2026-01-22 13:47:00.204380838 +0000 UTC m=+199.615490767" Jan 22 13:47:01 crc kubenswrapper[4769]: I0122 13:47:01.174707 4769 generic.go:334] "Generic (PLEG): container finished" podID="bc744951-0370-42be-a1c0-e639d8d8cd31" containerID="7a208431e8933c9e4e61cbd123e3fa30817703e607bc55c6193139bbbbb024a0" exitCode=0 Jan 22 13:47:01 crc kubenswrapper[4769]: I0122 13:47:01.174748 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2ks9m" event={"ID":"bc744951-0370-42be-a1c0-e639d8d8cd31","Type":"ContainerDied","Data":"7a208431e8933c9e4e61cbd123e3fa30817703e607bc55c6193139bbbbb024a0"} Jan 22 13:47:03 crc kubenswrapper[4769]: I0122 13:47:03.056810 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lxbp4" Jan 22 13:47:03 crc kubenswrapper[4769]: I0122 13:47:03.057210 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lxbp4" Jan 22 13:47:03 crc kubenswrapper[4769]: I0122 13:47:03.326432 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lxbp4" Jan 22 13:47:03 crc kubenswrapper[4769]: I0122 13:47:03.362613 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lxbp4" Jan 22 13:47:03 crc kubenswrapper[4769]: I0122 13:47:03.387738 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5rnmz" Jan 22 13:47:03 crc kubenswrapper[4769]: I0122 13:47:03.387813 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5rnmz" Jan 22 13:47:03 crc kubenswrapper[4769]: I0122 13:47:03.476588 4769 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/community-operators-5rnmz" Jan 22 13:47:04 crc kubenswrapper[4769]: I0122 13:47:04.238382 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5rnmz" Jan 22 13:47:05 crc kubenswrapper[4769]: I0122 13:47:05.088739 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-j2rz6" Jan 22 13:47:05 crc kubenswrapper[4769]: I0122 13:47:05.088856 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-j2rz6" Jan 22 13:47:05 crc kubenswrapper[4769]: I0122 13:47:05.155221 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-j2rz6" Jan 22 13:47:05 crc kubenswrapper[4769]: I0122 13:47:05.243207 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-j2rz6" Jan 22 13:47:05 crc kubenswrapper[4769]: I0122 13:47:05.562749 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5rnmz"] Jan 22 13:47:06 crc kubenswrapper[4769]: I0122 13:47:06.138942 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-k2w22" Jan 22 13:47:06 crc kubenswrapper[4769]: I0122 13:47:06.139318 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-k2w22" Jan 22 13:47:06 crc kubenswrapper[4769]: I0122 13:47:06.182107 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-k2w22" Jan 22 13:47:06 crc kubenswrapper[4769]: I0122 13:47:06.206338 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5rnmz" podUID="3b69c283-f109-4f09-9a01-8d21d3764892" containerName="registry-server" containerID="cri-o://1c8fc3cb530e77764cab2c943062502a2e038d4d2dc51fdf4d33f28c4197f9f8" gracePeriod=2 Jan 22 13:47:06 crc kubenswrapper[4769]: I0122 13:47:06.241496 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-k2w22" Jan 22 13:47:07 crc kubenswrapper[4769]: I0122 13:47:07.361204 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j2rz6"] Jan 22 13:47:07 crc kubenswrapper[4769]: I0122 13:47:07.361566 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-j2rz6" podUID="9fbf5655-9685-4e15-a6af-41793097be11" containerName="registry-server" containerID="cri-o://2b7cfe6672ef75a7bbf8ae1ba009321f1510b8bb071422e60f3a5319d2a3d6df" gracePeriod=2 Jan 22 13:47:09 crc kubenswrapper[4769]: I0122 13:47:09.228701 4769 generic.go:334] "Generic (PLEG): container finished" podID="9fbf5655-9685-4e15-a6af-41793097be11" containerID="2b7cfe6672ef75a7bbf8ae1ba009321f1510b8bb071422e60f3a5319d2a3d6df" exitCode=0 Jan 22 13:47:09 crc kubenswrapper[4769]: I0122 13:47:09.228781 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j2rz6" event={"ID":"9fbf5655-9685-4e15-a6af-41793097be11","Type":"ContainerDied","Data":"2b7cfe6672ef75a7bbf8ae1ba009321f1510b8bb071422e60f3a5319d2a3d6df"} Jan 22 13:47:09 crc kubenswrapper[4769]: I0122 13:47:09.232233 4769 generic.go:334] "Generic (PLEG): 
container finished" podID="3b69c283-f109-4f09-9a01-8d21d3764892" containerID="1c8fc3cb530e77764cab2c943062502a2e038d4d2dc51fdf4d33f28c4197f9f8" exitCode=0 Jan 22 13:47:09 crc kubenswrapper[4769]: I0122 13:47:09.232264 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5rnmz" event={"ID":"3b69c283-f109-4f09-9a01-8d21d3764892","Type":"ContainerDied","Data":"1c8fc3cb530e77764cab2c943062502a2e038d4d2dc51fdf4d33f28c4197f9f8"} Jan 22 13:47:10 crc kubenswrapper[4769]: I0122 13:47:10.481847 4769 patch_prober.go:28] interesting pod/machine-config-daemon-hwhw7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 13:47:10 crc kubenswrapper[4769]: I0122 13:47:10.481925 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 13:47:10 crc kubenswrapper[4769]: I0122 13:47:10.481975 4769 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" Jan 22 13:47:10 crc kubenswrapper[4769]: I0122 13:47:10.482499 4769 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9528976f6e0625739097546d794445c24881673bfd0df525a77bbbd61e67897d"} pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 13:47:10 crc kubenswrapper[4769]: I0122 13:47:10.482643 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419" containerName="machine-config-daemon" containerID="cri-o://9528976f6e0625739097546d794445c24881673bfd0df525a77bbbd61e67897d" gracePeriod=600 Jan 22 13:47:11 crc kubenswrapper[4769]: I0122 13:47:11.246407 4769 generic.go:334] "Generic (PLEG): container finished" podID="f0af8746-c9f0-48e6-8a60-02fed286b419" containerID="9528976f6e0625739097546d794445c24881673bfd0df525a77bbbd61e67897d" exitCode=0 Jan 22 13:47:11 crc kubenswrapper[4769]: I0122 13:47:11.246483 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" event={"ID":"f0af8746-c9f0-48e6-8a60-02fed286b419","Type":"ContainerDied","Data":"9528976f6e0625739097546d794445c24881673bfd0df525a77bbbd61e67897d"} Jan 22 13:47:11 crc kubenswrapper[4769]: I0122 13:47:11.964098 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5rnmz" Jan 22 13:47:11 crc kubenswrapper[4769]: I0122 13:47:11.976268 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j2rz6" Jan 22 13:47:12 crc kubenswrapper[4769]: I0122 13:47:12.088556 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b69c283-f109-4f09-9a01-8d21d3764892-catalog-content\") pod \"3b69c283-f109-4f09-9a01-8d21d3764892\" (UID: \"3b69c283-f109-4f09-9a01-8d21d3764892\") " Jan 22 13:47:12 crc kubenswrapper[4769]: I0122 13:47:12.088738 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fbf5655-9685-4e15-a6af-41793097be11-utilities\") pod \"9fbf5655-9685-4e15-a6af-41793097be11\" (UID: \"9fbf5655-9685-4e15-a6af-41793097be11\") " Jan 22 13:47:12 crc kubenswrapper[4769]: I0122 13:47:12.088956 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b69c283-f109-4f09-9a01-8d21d3764892-utilities\") pod \"3b69c283-f109-4f09-9a01-8d21d3764892\" (UID: \"3b69c283-f109-4f09-9a01-8d21d3764892\") " Jan 22 13:47:12 crc kubenswrapper[4769]: I0122 13:47:12.089103 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fbf5655-9685-4e15-a6af-41793097be11-catalog-content\") pod \"9fbf5655-9685-4e15-a6af-41793097be11\" (UID: \"9fbf5655-9685-4e15-a6af-41793097be11\") " Jan 22 13:47:12 crc kubenswrapper[4769]: I0122 13:47:12.089176 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mn7q6\" (UniqueName: \"kubernetes.io/projected/9fbf5655-9685-4e15-a6af-41793097be11-kube-api-access-mn7q6\") pod \"9fbf5655-9685-4e15-a6af-41793097be11\" (UID: \"9fbf5655-9685-4e15-a6af-41793097be11\") " Jan 22 13:47:12 crc kubenswrapper[4769]: I0122 13:47:12.089206 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gj54v\" (UniqueName: \"kubernetes.io/projected/3b69c283-f109-4f09-9a01-8d21d3764892-kube-api-access-gj54v\") pod \"3b69c283-f109-4f09-9a01-8d21d3764892\" (UID: \"3b69c283-f109-4f09-9a01-8d21d3764892\") " Jan 22 13:47:12 crc kubenswrapper[4769]: I0122 13:47:12.090561 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9fbf5655-9685-4e15-a6af-41793097be11-utilities" (OuterVolumeSpecName: "utilities") pod "9fbf5655-9685-4e15-a6af-41793097be11" (UID: "9fbf5655-9685-4e15-a6af-41793097be11"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 13:47:12 crc kubenswrapper[4769]: I0122 13:47:12.090988 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b69c283-f109-4f09-9a01-8d21d3764892-utilities" (OuterVolumeSpecName: "utilities") pod "3b69c283-f109-4f09-9a01-8d21d3764892" (UID: "3b69c283-f109-4f09-9a01-8d21d3764892"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 13:47:12 crc kubenswrapper[4769]: I0122 13:47:12.095340 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fbf5655-9685-4e15-a6af-41793097be11-kube-api-access-mn7q6" (OuterVolumeSpecName: "kube-api-access-mn7q6") pod "9fbf5655-9685-4e15-a6af-41793097be11" (UID: "9fbf5655-9685-4e15-a6af-41793097be11"). InnerVolumeSpecName "kube-api-access-mn7q6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:47:12 crc kubenswrapper[4769]: I0122 13:47:12.097307 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b69c283-f109-4f09-9a01-8d21d3764892-kube-api-access-gj54v" (OuterVolumeSpecName: "kube-api-access-gj54v") pod "3b69c283-f109-4f09-9a01-8d21d3764892" (UID: "3b69c283-f109-4f09-9a01-8d21d3764892"). InnerVolumeSpecName "kube-api-access-gj54v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:47:12 crc kubenswrapper[4769]: I0122 13:47:12.115128 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9fbf5655-9685-4e15-a6af-41793097be11-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9fbf5655-9685-4e15-a6af-41793097be11" (UID: "9fbf5655-9685-4e15-a6af-41793097be11"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 13:47:12 crc kubenswrapper[4769]: I0122 13:47:12.152562 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b69c283-f109-4f09-9a01-8d21d3764892-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3b69c283-f109-4f09-9a01-8d21d3764892" (UID: "3b69c283-f109-4f09-9a01-8d21d3764892"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 13:47:12 crc kubenswrapper[4769]: I0122 13:47:12.192414 4769 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b69c283-f109-4f09-9a01-8d21d3764892-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 13:47:12 crc kubenswrapper[4769]: I0122 13:47:12.192460 4769 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fbf5655-9685-4e15-a6af-41793097be11-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 13:47:12 crc kubenswrapper[4769]: I0122 13:47:12.192472 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mn7q6\" (UniqueName: \"kubernetes.io/projected/9fbf5655-9685-4e15-a6af-41793097be11-kube-api-access-mn7q6\") on node \"crc\" DevicePath \"\"" Jan 22 13:47:12 crc kubenswrapper[4769]: I0122 13:47:12.192487 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gj54v\" (UniqueName: \"kubernetes.io/projected/3b69c283-f109-4f09-9a01-8d21d3764892-kube-api-access-gj54v\") on node \"crc\" DevicePath \"\"" Jan 22 13:47:12 crc kubenswrapper[4769]: I0122 13:47:12.192495 4769 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b69c283-f109-4f09-9a01-8d21d3764892-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 13:47:12 crc kubenswrapper[4769]: I0122 13:47:12.192505 4769 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fbf5655-9685-4e15-a6af-41793097be11-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 13:47:12 crc kubenswrapper[4769]: I0122 13:47:12.255759 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5rnmz" event={"ID":"3b69c283-f109-4f09-9a01-8d21d3764892","Type":"ContainerDied","Data":"95901b43f1b0b192d242724acdf435d55c1a459bc7ffc435091c0491b7b2a77a"} Jan 22 13:47:12 crc kubenswrapper[4769]: I0122 13:47:12.255848 4769 scope.go:117] "RemoveContainer" containerID="1c8fc3cb530e77764cab2c943062502a2e038d4d2dc51fdf4d33f28c4197f9f8" Jan 22 13:47:12 crc 
kubenswrapper[4769]: I0122 13:47:12.256044 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5rnmz"
Jan 22 13:47:12 crc kubenswrapper[4769]: I0122 13:47:12.259913 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j2rz6" event={"ID":"9fbf5655-9685-4e15-a6af-41793097be11","Type":"ContainerDied","Data":"a09f3ed86d9fde6e4e25dc5687d5358cea66879bd11fddb52ce0cdd1a1c76559"}
Jan 22 13:47:12 crc kubenswrapper[4769]: I0122 13:47:12.259949 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j2rz6"
Jan 22 13:47:12 crc kubenswrapper[4769]: I0122 13:47:12.285819 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5rnmz"]
Jan 22 13:47:12 crc kubenswrapper[4769]: I0122 13:47:12.288548 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5rnmz"]
Jan 22 13:47:12 crc kubenswrapper[4769]: I0122 13:47:12.295039 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j2rz6"]
Jan 22 13:47:12 crc kubenswrapper[4769]: I0122 13:47:12.298588 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-j2rz6"]
Jan 22 13:47:12 crc kubenswrapper[4769]: I0122 13:47:12.895135 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b69c283-f109-4f09-9a01-8d21d3764892" path="/var/lib/kubelet/pods/3b69c283-f109-4f09-9a01-8d21d3764892/volumes"
Jan 22 13:47:12 crc kubenswrapper[4769]: I0122 13:47:12.897541 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fbf5655-9685-4e15-a6af-41793097be11" path="/var/lib/kubelet/pods/9fbf5655-9685-4e15-a6af-41793097be11/volumes"
Jan 22 13:47:14 crc kubenswrapper[4769]: I0122 13:47:14.728106 4769 scope.go:117] "RemoveContainer" containerID="e400121af3cd67eb8bf5be7255f64ed7758734a95d64ae486777a9d10ec8aeb7"
Jan 22 13:47:14 crc kubenswrapper[4769]: I0122 13:47:14.812099 4769 scope.go:117] "RemoveContainer" containerID="046d05b3f47f3e1cd122e05caaffbaade2a750f09bb666394477d6007a1313e9"
Jan 22 13:47:15 crc kubenswrapper[4769]: I0122 13:47:15.830146 4769 scope.go:117] "RemoveContainer" containerID="2b7cfe6672ef75a7bbf8ae1ba009321f1510b8bb071422e60f3a5319d2a3d6df"
Jan 22 13:47:15 crc kubenswrapper[4769]: I0122 13:47:15.907380 4769 scope.go:117] "RemoveContainer" containerID="2093f881d46af13d52d1fd20f110b59c6f048ae5d26012e9bdb3824ba5bc9f97"
Jan 22 13:47:15 crc kubenswrapper[4769]: I0122 13:47:15.958613 4769 scope.go:117] "RemoveContainer" containerID="3502879dadc38b5cd99def96e405968a047479756eeea61ee2071af582a36fdd"
Jan 22 13:47:16 crc kubenswrapper[4769]: I0122 13:47:16.289769 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" event={"ID":"f0af8746-c9f0-48e6-8a60-02fed286b419","Type":"ContainerStarted","Data":"bbd22e04ee72948953a90ab44939dc109e22abcfa3a37b3bf1a288ca6535ed41"}
Jan 22 13:47:16 crc kubenswrapper[4769]: I0122 13:47:16.293275 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9x475" event={"ID":"143027dc-ac6a-442f-bf57-3dcd7efd0427","Type":"ContainerStarted","Data":"e4d380a769da25ffa3d6e4f72472de743cdc4dd53dbe264e09a44596b45a58b9"}
Jan 22 13:47:16 crc kubenswrapper[4769]: I0122 13:47:16.296591 4769 generic.go:334] "Generic (PLEG): container finished" podID="98dd81ac-1a92-4d5a-9e09-bcc49ac33a85" containerID="19f11c0236c241f234013da4669e8dd67b3f4430afe2db85d03abaaa7cb48e7c" exitCode=0
Jan 22 13:47:16 crc kubenswrapper[4769]: I0122 13:47:16.296667 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v8jk5" event={"ID":"98dd81ac-1a92-4d5a-9e09-bcc49ac33a85","Type":"ContainerDied","Data":"19f11c0236c241f234013da4669e8dd67b3f4430afe2db85d03abaaa7cb48e7c"}
Jan 22 13:47:16 crc kubenswrapper[4769]: I0122 13:47:16.300152 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2ks9m" event={"ID":"bc744951-0370-42be-a1c0-e639d8d8cd31","Type":"ContainerStarted","Data":"0818f3de0722e6de63433adafbc2984ceb47d784f262f5f25aac6b7ea434f1d3"}
Jan 22 13:47:16 crc kubenswrapper[4769]: I0122 13:47:16.302230 4769 generic.go:334] "Generic (PLEG): container finished" podID="4f403243-0359-478d-a3a6-29a8f0bc29e2" containerID="c32df72a8ee39ee0d3f1c526bf4f6f62cee45d6cd2f6eccfd82a50af54dc18b6" exitCode=0
Jan 22 13:47:16 crc kubenswrapper[4769]: I0122 13:47:16.302289 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7wh4n" event={"ID":"4f403243-0359-478d-a3a6-29a8f0bc29e2","Type":"ContainerDied","Data":"c32df72a8ee39ee0d3f1c526bf4f6f62cee45d6cd2f6eccfd82a50af54dc18b6"}
Jan 22 13:47:16 crc kubenswrapper[4769]: I0122 13:47:16.330973 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2ks9m" podStartSLOduration=3.8137846250000003 podStartE2EDuration="1m14.330944426s" podCreationTimestamp="2026-01-22 13:46:02 +0000 UTC" firstStartedPulling="2026-01-22 13:46:05.310103194 +0000 UTC m=+144.721213123" lastFinishedPulling="2026-01-22 13:47:15.827262995 +0000 UTC m=+215.238372924" observedRunningTime="2026-01-22 13:47:16.326904924 +0000 UTC m=+215.738014873" watchObservedRunningTime="2026-01-22 13:47:16.330944426 +0000 UTC m=+215.742054365"
Jan 22 13:47:16 crc kubenswrapper[4769]: I0122 13:47:16.533232 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-jtzpg"]
Jan 22 13:47:17 crc kubenswrapper[4769]: I0122 13:47:17.311721 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7wh4n" event={"ID":"4f403243-0359-478d-a3a6-29a8f0bc29e2","Type":"ContainerStarted","Data":"b88e53f360c79b642215822aa458c85cddfb527d712a2e23409b20d9d691b259"}
Jan 22 13:47:17 crc kubenswrapper[4769]: I0122 13:47:17.314286 4769 generic.go:334] "Generic (PLEG): container finished" podID="143027dc-ac6a-442f-bf57-3dcd7efd0427" containerID="e4d380a769da25ffa3d6e4f72472de743cdc4dd53dbe264e09a44596b45a58b9" exitCode=0
Jan 22 13:47:17 crc kubenswrapper[4769]: I0122 13:47:17.314371 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9x475" event={"ID":"143027dc-ac6a-442f-bf57-3dcd7efd0427","Type":"ContainerDied","Data":"e4d380a769da25ffa3d6e4f72472de743cdc4dd53dbe264e09a44596b45a58b9"}
Jan 22 13:47:17 crc kubenswrapper[4769]: I0122 13:47:17.316899 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v8jk5" event={"ID":"98dd81ac-1a92-4d5a-9e09-bcc49ac33a85","Type":"ContainerStarted","Data":"2531649194d6834a01b61908b7793b00e8109633abda7d5a02d5eb68f320b893"}
Jan 22 13:47:17 crc kubenswrapper[4769]: I0122 13:47:17.335202 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7wh4n" podStartSLOduration=3.905389084 podStartE2EDuration="1m15.335183359s" podCreationTimestamp="2026-01-22 13:46:02 +0000 UTC" firstStartedPulling="2026-01-22 13:46:05.340693886 +0000 UTC m=+144.751803815" lastFinishedPulling="2026-01-22 13:47:16.770488161 +0000 UTC m=+216.181598090" observedRunningTime="2026-01-22 13:47:17.331594611 +0000 UTC m=+216.742704540" watchObservedRunningTime="2026-01-22 13:47:17.335183359 +0000 UTC m=+216.746293288"
Jan 22 13:47:17 crc kubenswrapper[4769]: I0122 13:47:17.354345 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-v8jk5" podStartSLOduration=3.156676956 podStartE2EDuration="1m13.354324446s" podCreationTimestamp="2026-01-22 13:46:04 +0000 UTC" firstStartedPulling="2026-01-22 13:46:06.511090272 +0000 UTC m=+145.922200201" lastFinishedPulling="2026-01-22 13:47:16.708737762 +0000 UTC m=+216.119847691" observedRunningTime="2026-01-22 13:47:17.350389298 +0000 UTC m=+216.761499247" watchObservedRunningTime="2026-01-22 13:47:17.354324446 +0000 UTC m=+216.765434375"
Jan 22 13:47:18 crc kubenswrapper[4769]: I0122 13:47:18.325218 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9x475" event={"ID":"143027dc-ac6a-442f-bf57-3dcd7efd0427","Type":"ContainerStarted","Data":"a327a36f7022c1a24c8a5b106ee59eef5d512a899727f29882a5d05c93111b22"}
Jan 22 13:47:18 crc kubenswrapper[4769]: I0122 13:47:18.346308 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9x475" podStartSLOduration=2.321950052 podStartE2EDuration="1m12.346291752s" podCreationTimestamp="2026-01-22 13:46:06 +0000 UTC" firstStartedPulling="2026-01-22 13:46:07.664390598 +0000 UTC m=+147.075500527" lastFinishedPulling="2026-01-22 13:47:17.688732298 +0000 UTC m=+217.099842227" observedRunningTime="2026-01-22 13:47:18.342658792 +0000 UTC m=+217.753768721" watchObservedRunningTime="2026-01-22 13:47:18.346291752 +0000 UTC m=+217.757401681"
Jan 22 13:47:23 crc kubenswrapper[4769]: I0122 13:47:23.034744 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7wh4n"
Jan 22 13:47:23 crc kubenswrapper[4769]: I0122 13:47:23.035041 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7wh4n"
Jan 22 13:47:23 crc kubenswrapper[4769]: I0122 13:47:23.088747 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7wh4n"
Jan 22 13:47:23 crc kubenswrapper[4769]: I0122 13:47:23.373693 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2ks9m"
Jan 22 13:47:23 crc kubenswrapper[4769]: I0122 13:47:23.373755 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2ks9m"
Jan 22 13:47:23 crc kubenswrapper[4769]: I0122 13:47:23.397377 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7wh4n"
Jan 22 13:47:23 crc kubenswrapper[4769]: I0122 13:47:23.412769 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2ks9m"
Jan 22 13:47:24 crc kubenswrapper[4769]: I0122 13:47:24.408890 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2ks9m"
Jan 22 13:47:24 crc kubenswrapper[4769]: I0122 13:47:24.805909 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-v8jk5"
Jan 22 13:47:24 crc kubenswrapper[4769]: I0122 13:47:24.805995 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-v8jk5"
Jan 22 13:47:24 crc kubenswrapper[4769]: I0122 13:47:24.847637 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-v8jk5"
Jan 22 13:47:25 crc kubenswrapper[4769]: I0122 13:47:25.426839 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-v8jk5"
Jan 22 13:47:26 crc kubenswrapper[4769]: I0122 13:47:26.529823 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9x475"
Jan 22 13:47:26 crc kubenswrapper[4769]: I0122 13:47:26.530113 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9x475"
Jan 22 13:47:26 crc kubenswrapper[4769]: I0122 13:47:26.565274 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9x475"
Jan 22 13:47:27 crc kubenswrapper[4769]: I0122 13:47:27.425236 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9x475"
Jan 22 13:47:28 crc kubenswrapper[4769]: I0122 13:47:28.362582 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2ks9m"]
Jan 22 13:47:28 crc kubenswrapper[4769]: I0122 13:47:28.363188 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2ks9m" podUID="bc744951-0370-42be-a1c0-e639d8d8cd31" containerName="registry-server" containerID="cri-o://0818f3de0722e6de63433adafbc2984ceb47d784f262f5f25aac6b7ea434f1d3" gracePeriod=2
Jan 22 13:47:30 crc kubenswrapper[4769]: I0122 13:47:30.569234 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9x475"]
Jan 22 13:47:30 crc kubenswrapper[4769]: I0122 13:47:30.569693 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9x475" podUID="143027dc-ac6a-442f-bf57-3dcd7efd0427" containerName="registry-server" containerID="cri-o://a327a36f7022c1a24c8a5b106ee59eef5d512a899727f29882a5d05c93111b22" gracePeriod=2
Jan 22 13:47:31 crc kubenswrapper[4769]: I0122 13:47:31.402737 4769 generic.go:334] "Generic (PLEG): container finished" podID="bc744951-0370-42be-a1c0-e639d8d8cd31" containerID="0818f3de0722e6de63433adafbc2984ceb47d784f262f5f25aac6b7ea434f1d3" exitCode=0
Jan 22 13:47:31 crc kubenswrapper[4769]: I0122 13:47:31.402849 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2ks9m" event={"ID":"bc744951-0370-42be-a1c0-e639d8d8cd31","Type":"ContainerDied","Data":"0818f3de0722e6de63433adafbc2984ceb47d784f262f5f25aac6b7ea434f1d3"}
Jan 22 13:47:31 crc kubenswrapper[4769]: I0122 13:47:31.910475 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2ks9m"
Jan 22 13:47:31 crc kubenswrapper[4769]: I0122 13:47:31.981289 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9x475"
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.052205 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/143027dc-ac6a-442f-bf57-3dcd7efd0427-catalog-content\") pod \"143027dc-ac6a-442f-bf57-3dcd7efd0427\" (UID: \"143027dc-ac6a-442f-bf57-3dcd7efd0427\") "
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.052288 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjqjf\" (UniqueName: \"kubernetes.io/projected/143027dc-ac6a-442f-bf57-3dcd7efd0427-kube-api-access-hjqjf\") pod \"143027dc-ac6a-442f-bf57-3dcd7efd0427\" (UID: \"143027dc-ac6a-442f-bf57-3dcd7efd0427\") "
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.052321 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmkrp\" (UniqueName: \"kubernetes.io/projected/bc744951-0370-42be-a1c0-e639d8d8cd31-kube-api-access-xmkrp\") pod \"bc744951-0370-42be-a1c0-e639d8d8cd31\" (UID: \"bc744951-0370-42be-a1c0-e639d8d8cd31\") "
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.052380 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc744951-0370-42be-a1c0-e639d8d8cd31-catalog-content\") pod \"bc744951-0370-42be-a1c0-e639d8d8cd31\" (UID: \"bc744951-0370-42be-a1c0-e639d8d8cd31\") "
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.052403 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/143027dc-ac6a-442f-bf57-3dcd7efd0427-utilities\") pod \"143027dc-ac6a-442f-bf57-3dcd7efd0427\" (UID: \"143027dc-ac6a-442f-bf57-3dcd7efd0427\") "
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.052485 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc744951-0370-42be-a1c0-e639d8d8cd31-utilities\") pod \"bc744951-0370-42be-a1c0-e639d8d8cd31\" (UID: \"bc744951-0370-42be-a1c0-e639d8d8cd31\") "
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.054068 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/143027dc-ac6a-442f-bf57-3dcd7efd0427-utilities" (OuterVolumeSpecName: "utilities") pod "143027dc-ac6a-442f-bf57-3dcd7efd0427" (UID: "143027dc-ac6a-442f-bf57-3dcd7efd0427"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.054609 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc744951-0370-42be-a1c0-e639d8d8cd31-utilities" (OuterVolumeSpecName: "utilities") pod "bc744951-0370-42be-a1c0-e639d8d8cd31" (UID: "bc744951-0370-42be-a1c0-e639d8d8cd31"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.058500 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/143027dc-ac6a-442f-bf57-3dcd7efd0427-kube-api-access-hjqjf" (OuterVolumeSpecName: "kube-api-access-hjqjf") pod "143027dc-ac6a-442f-bf57-3dcd7efd0427" (UID: "143027dc-ac6a-442f-bf57-3dcd7efd0427"). InnerVolumeSpecName "kube-api-access-hjqjf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.058538 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc744951-0370-42be-a1c0-e639d8d8cd31-kube-api-access-xmkrp" (OuterVolumeSpecName: "kube-api-access-xmkrp") pod "bc744951-0370-42be-a1c0-e639d8d8cd31" (UID: "bc744951-0370-42be-a1c0-e639d8d8cd31"). InnerVolumeSpecName "kube-api-access-xmkrp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.114119 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc744951-0370-42be-a1c0-e639d8d8cd31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bc744951-0370-42be-a1c0-e639d8d8cd31" (UID: "bc744951-0370-42be-a1c0-e639d8d8cd31"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.155025 4769 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc744951-0370-42be-a1c0-e639d8d8cd31-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.155086 4769 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/143027dc-ac6a-442f-bf57-3dcd7efd0427-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.155104 4769 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc744951-0370-42be-a1c0-e639d8d8cd31-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.155118 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hjqjf\" (UniqueName: \"kubernetes.io/projected/143027dc-ac6a-442f-bf57-3dcd7efd0427-kube-api-access-hjqjf\") on node \"crc\" DevicePath \"\""
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.155132 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xmkrp\" (UniqueName: \"kubernetes.io/projected/bc744951-0370-42be-a1c0-e639d8d8cd31-kube-api-access-xmkrp\") on node \"crc\" DevicePath \"\""
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.163945 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/143027dc-ac6a-442f-bf57-3dcd7efd0427-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "143027dc-ac6a-442f-bf57-3dcd7efd0427" (UID: "143027dc-ac6a-442f-bf57-3dcd7efd0427"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.317555 4769 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 22 13:47:32 crc kubenswrapper[4769]: E0122 13:47:32.317844 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b69c283-f109-4f09-9a01-8d21d3764892" containerName="extract-content"
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.317862 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b69c283-f109-4f09-9a01-8d21d3764892" containerName="extract-content"
Jan 22 13:47:32 crc kubenswrapper[4769]: E0122 13:47:32.317883 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b69c283-f109-4f09-9a01-8d21d3764892" containerName="extract-utilities"
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.317891 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b69c283-f109-4f09-9a01-8d21d3764892" containerName="extract-utilities"
Jan 22 13:47:32 crc kubenswrapper[4769]: E0122 13:47:32.317905 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b69c283-f109-4f09-9a01-8d21d3764892" containerName="registry-server"
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.317914 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b69c283-f109-4f09-9a01-8d21d3764892" containerName="registry-server"
Jan 22 13:47:32 crc kubenswrapper[4769]: E0122 13:47:32.317928 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fbf5655-9685-4e15-a6af-41793097be11" containerName="extract-content"
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.317936 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fbf5655-9685-4e15-a6af-41793097be11" containerName="extract-content"
Jan 22 13:47:32 crc kubenswrapper[4769]: E0122 13:47:32.317947 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fbf5655-9685-4e15-a6af-41793097be11" containerName="extract-utilities"
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.317956 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fbf5655-9685-4e15-a6af-41793097be11" containerName="extract-utilities"
Jan 22 13:47:32 crc kubenswrapper[4769]: E0122 13:47:32.317972 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc744951-0370-42be-a1c0-e639d8d8cd31" containerName="registry-server"
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.317982 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc744951-0370-42be-a1c0-e639d8d8cd31" containerName="registry-server"
Jan 22 13:47:32 crc kubenswrapper[4769]: E0122 13:47:32.318000 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="143027dc-ac6a-442f-bf57-3dcd7efd0427" containerName="extract-content"
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.318010 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="143027dc-ac6a-442f-bf57-3dcd7efd0427" containerName="extract-content"
Jan 22 13:47:32 crc kubenswrapper[4769]: E0122 13:47:32.318019 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="143027dc-ac6a-442f-bf57-3dcd7efd0427" containerName="registry-server"
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.318028 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="143027dc-ac6a-442f-bf57-3dcd7efd0427" containerName="registry-server"
Jan 22 13:47:32 crc kubenswrapper[4769]: E0122 13:47:32.318071 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fbf5655-9685-4e15-a6af-41793097be11" containerName="registry-server"
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.318080 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fbf5655-9685-4e15-a6af-41793097be11" containerName="registry-server"
Jan 22 13:47:32 crc kubenswrapper[4769]: E0122 13:47:32.318093 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="143027dc-ac6a-442f-bf57-3dcd7efd0427" containerName="extract-utilities"
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.318101 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="143027dc-ac6a-442f-bf57-3dcd7efd0427" containerName="extract-utilities"
Jan 22 13:47:32 crc kubenswrapper[4769]: E0122 13:47:32.318114 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc744951-0370-42be-a1c0-e639d8d8cd31" containerName="extract-content"
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.318122 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc744951-0370-42be-a1c0-e639d8d8cd31" containerName="extract-content"
Jan 22 13:47:32 crc kubenswrapper[4769]: E0122 13:47:32.318135 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc744951-0370-42be-a1c0-e639d8d8cd31" containerName="extract-utilities"
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.318143 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc744951-0370-42be-a1c0-e639d8d8cd31" containerName="extract-utilities"
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.318257 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b69c283-f109-4f09-9a01-8d21d3764892" containerName="registry-server"
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.318276 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fbf5655-9685-4e15-a6af-41793097be11" containerName="registry-server"
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.318289 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="143027dc-ac6a-442f-bf57-3dcd7efd0427" containerName="registry-server"
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.318301 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc744951-0370-42be-a1c0-e639d8d8cd31" containerName="registry-server"
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.318705 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.322520 4769 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.322571 4769 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 22 13:47:32 crc kubenswrapper[4769]: E0122 13:47:32.322738 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.322758 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Jan 22 13:47:32 crc kubenswrapper[4769]: E0122 13:47:32.322773 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.322782 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 22 13:47:32 crc kubenswrapper[4769]: E0122 13:47:32.322818 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.322831 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 22 13:47:32 crc kubenswrapper[4769]: E0122 13:47:32.322845 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.322855 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Jan 22 13:47:32 crc kubenswrapper[4769]: E0122 13:47:32.322871 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.322881 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Jan 22 13:47:32 crc kubenswrapper[4769]: E0122 13:47:32.322898 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.322908 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Jan 22 13:47:32 crc kubenswrapper[4769]: E0122 13:47:32.322921 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.322931 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.323068 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.323084 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.323099 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.323110 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.323122 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.323133 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.324942 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d" gracePeriod=15
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.325104 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925" gracePeriod=15
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.325169 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c" gracePeriod=15
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.325208 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda" gracePeriod=15
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.325252 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45" gracePeriod=15
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.340078 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.340125 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.340141 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.340166 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.340183 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.340203 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.340219 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.340233 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.340276 4769 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/143027dc-ac6a-442f-bf57-3dcd7efd0427-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.353810 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.440652 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.440716 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.440737 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.440769 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.440803 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.440829 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.440842 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.440856 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.440925 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.440957 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.440976 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.440997 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.441016 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.441046 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.441070 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.441095 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.655337 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.852556 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2ks9m" event={"ID":"bc744951-0370-42be-a1c0-e639d8d8cd31","Type":"ContainerDied","Data":"9d4a213a14f5a21b9ecd231875d6aa22cbbfb7d75a58db27a2f98d97feb1dafb"}
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.852595 4769 scope.go:117] "RemoveContainer" containerID="0818f3de0722e6de63433adafbc2984ceb47d784f262f5f25aac6b7ea434f1d3"
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.852700 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2ks9m"
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.859017 4769 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13"
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.860189 4769 generic.go:334] "Generic (PLEG): container finished" podID="143027dc-ac6a-442f-bf57-3dcd7efd0427" containerID="a327a36f7022c1a24c8a5b106ee59eef5d512a899727f29882a5d05c93111b22" exitCode=0
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.860231 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9x475" event={"ID":"143027dc-ac6a-442f-bf57-3dcd7efd0427","Type":"ContainerDied","Data":"a327a36f7022c1a24c8a5b106ee59eef5d512a899727f29882a5d05c93111b22"}
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.860258 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9x475" event={"ID":"143027dc-ac6a-442f-bf57-3dcd7efd0427","Type":"ContainerDied","Data":"eb0f0ad4dc9a1519cccefda94331b40c9be757f72e950d3d8010309da7e5d54b"}
Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.860326 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9x475"
Jan 22 13:47:32 crc kubenswrapper[4769]: E0122 13:47:32.917437 4769 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.50:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188d11aa91e7e10e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 13:47:32.916478222 +0000 UTC m=+232.327588151,LastTimestamp:2026-01-22 13:47:32.916478222 +0000 UTC m=+232.327588151,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 22 13:47:33 crc kubenswrapper[4769]: I0122 13:47:33.010598 4769 scope.go:117] "RemoveContainer" containerID="7a208431e8933c9e4e61cbd123e3fa30817703e607bc55c6193139bbbbb024a0"
Jan 22 13:47:33 crc kubenswrapper[4769]: I0122 13:47:33.038562 4769 scope.go:117] "RemoveContainer" containerID="acd4331bf5a97dd63bc534d1279a9dc1a57106f0b79215b9c6214a3510910a34"
Jan 22 13:47:33 crc kubenswrapper[4769]: I0122 13:47:33.052014 4769 scope.go:117] "RemoveContainer" containerID="a327a36f7022c1a24c8a5b106ee59eef5d512a899727f29882a5d05c93111b22"
Jan 22 13:47:33 crc kubenswrapper[4769]: I0122 13:47:33.078048 4769 scope.go:117] "RemoveContainer" containerID="e4d380a769da25ffa3d6e4f72472de743cdc4dd53dbe264e09a44596b45a58b9"
Jan 22 13:47:33 crc kubenswrapper[4769]: I0122 13:47:33.102833 4769 scope.go:117] "RemoveContainer" containerID="5a649e12e124f4a64a4f1afd91e39d4e717943b4a392b3b9c65213bb1e563adb"
Jan 22 13:47:33 crc kubenswrapper[4769]: I0122 13:47:33.118196 4769 scope.go:117] "RemoveContainer" containerID="a327a36f7022c1a24c8a5b106ee59eef5d512a899727f29882a5d05c93111b22"
Jan 22 13:47:33 crc kubenswrapper[4769]: E0122 13:47:33.118578 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a327a36f7022c1a24c8a5b106ee59eef5d512a899727f29882a5d05c93111b22\": container with ID starting with a327a36f7022c1a24c8a5b106ee59eef5d512a899727f29882a5d05c93111b22 not found: ID does not exist" containerID="a327a36f7022c1a24c8a5b106ee59eef5d512a899727f29882a5d05c93111b22"
Jan 22 13:47:33 crc kubenswrapper[4769]: I0122 13:47:33.118611 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a327a36f7022c1a24c8a5b106ee59eef5d512a899727f29882a5d05c93111b22"} err="failed to get container status \"a327a36f7022c1a24c8a5b106ee59eef5d512a899727f29882a5d05c93111b22\": rpc error: code = NotFound desc = could not find container \"a327a36f7022c1a24c8a5b106ee59eef5d512a899727f29882a5d05c93111b22\": container with ID starting with a327a36f7022c1a24c8a5b106ee59eef5d512a899727f29882a5d05c93111b22 not found: ID does not exist"
Jan 22 13:47:33 crc kubenswrapper[4769]: I0122 13:47:33.118634 4769 scope.go:117] "RemoveContainer" containerID="e4d380a769da25ffa3d6e4f72472de743cdc4dd53dbe264e09a44596b45a58b9"
Jan 22 13:47:33 crc kubenswrapper[4769]: E0122 13:47:33.118989 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4d380a769da25ffa3d6e4f72472de743cdc4dd53dbe264e09a44596b45a58b9\": container with ID starting with e4d380a769da25ffa3d6e4f72472de743cdc4dd53dbe264e09a44596b45a58b9 not found: ID does not exist" containerID="e4d380a769da25ffa3d6e4f72472de743cdc4dd53dbe264e09a44596b45a58b9"
Jan 22 13:47:33 crc kubenswrapper[4769]: I0122 13:47:33.119007 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4d380a769da25ffa3d6e4f72472de743cdc4dd53dbe264e09a44596b45a58b9"} err="failed to get container status \"e4d380a769da25ffa3d6e4f72472de743cdc4dd53dbe264e09a44596b45a58b9\": rpc error: code = NotFound desc = could not find container \"e4d380a769da25ffa3d6e4f72472de743cdc4dd53dbe264e09a44596b45a58b9\": container with ID starting with e4d380a769da25ffa3d6e4f72472de743cdc4dd53dbe264e09a44596b45a58b9 not found: ID does not exist"
Jan 22 13:47:33 crc kubenswrapper[4769]: I0122 13:47:33.119020 4769 scope.go:117] "RemoveContainer" containerID="5a649e12e124f4a64a4f1afd91e39d4e717943b4a392b3b9c65213bb1e563adb"
Jan 22 13:47:33 crc kubenswrapper[4769]: E0122 13:47:33.119274 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a649e12e124f4a64a4f1afd91e39d4e717943b4a392b3b9c65213bb1e563adb\": container with ID starting with 5a649e12e124f4a64a4f1afd91e39d4e717943b4a392b3b9c65213bb1e563adb not found: ID does not exist" containerID="5a649e12e124f4a64a4f1afd91e39d4e717943b4a392b3b9c65213bb1e563adb"
Jan 22 13:47:33 crc kubenswrapper[4769]: I0122 13:47:33.119294 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a649e12e124f4a64a4f1afd91e39d4e717943b4a392b3b9c65213bb1e563adb"} err="failed to get container status \"5a649e12e124f4a64a4f1afd91e39d4e717943b4a392b3b9c65213bb1e563adb\": rpc error: code = NotFound desc = could not find container \"5a649e12e124f4a64a4f1afd91e39d4e717943b4a392b3b9c65213bb1e563adb\": container with ID starting with 5a649e12e124f4a64a4f1afd91e39d4e717943b4a392b3b9c65213bb1e563adb not found: ID does not exist"
Jan 22 13:47:33 crc kubenswrapper[4769]: E0122 13:47:33.570213 4769 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused"
Jan 22 13:47:33 crc kubenswrapper[4769]: E0122 13:47:33.570674 4769 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused"
Jan 22 13:47:33 crc kubenswrapper[4769]: E0122 13:47:33.571252 4769 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused"
Jan 22 13:47:33 crc kubenswrapper[4769]: E0122 13:47:33.571842 4769 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused"
Jan 22 13:47:33 crc kubenswrapper[4769]: E0122 13:47:33.572337 4769 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused"
Jan 22 13:47:33 crc kubenswrapper[4769]: I0122 13:47:33.572374 4769 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Jan 22 13:47:33 crc kubenswrapper[4769]: E0122 13:47:33.572673 4769 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" interval="200ms"
Jan 22 13:47:33 crc kubenswrapper[4769]: E0122 13:47:33.773487 4769 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" interval="400ms"
Jan 22 13:47:33 crc kubenswrapper[4769]: I0122 13:47:33.867642 4769 generic.go:334] "Generic (PLEG): container finished" podID="98422033-e252-4416-9d6c-9a782f84a615" containerID="4c41b665319b212a65ed0ded3d69aee9bf5218eae07c0bc2b667f9ac261cd977" exitCode=0
Jan 22 13:47:33 crc kubenswrapper[4769]: I0122 13:47:33.867735 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"98422033-e252-4416-9d6c-9a782f84a615","Type":"ContainerDied","Data":"4c41b665319b212a65ed0ded3d69aee9bf5218eae07c0bc2b667f9ac261cd977"}
Jan 22 13:47:33 crc kubenswrapper[4769]: I0122 13:47:33.869462 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"994803c634fe2140b142aa5aa7b24de248a614ae29172583b7926ab74e3de4ce"}
Jan 22 13:47:33 crc kubenswrapper[4769]: I0122 13:47:33.869503 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"5558e879799fd2ba6a9fcdb28caf045208b66d263eead1e6875aa65fba01d965"}
Jan 22 13:47:33 crc kubenswrapper[4769]: I0122 13:47:33.871193 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Jan 22 13:47:33 crc kubenswrapper[4769]: I0122 13:47:33.872200 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 22 13:47:33 crc kubenswrapper[4769]: I0122 13:47:33.872973 4769 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925" exitCode=0
Jan 22 13:47:33 crc kubenswrapper[4769]: I0122 13:47:33.873015 4769 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c" exitCode=0
Jan 22 13:47:33 crc kubenswrapper[4769]: I0122 13:47:33.873025 4769 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda" exitCode=0
Jan 22 13:47:33 crc kubenswrapper[4769]: I0122 13:47:33.873036 4769 scope.go:117] "RemoveContainer" containerID="1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d"
Jan 22 13:47:33 crc kubenswrapper[4769]: I0122 13:47:33.873054 4769 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45" exitCode=2
Jan 22 13:47:34 crc kubenswrapper[4769]: E0122 13:47:34.174777 4769 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" interval="800ms"
Jan 22 13:47:34 crc kubenswrapper[4769]: I0122 13:47:34.902987 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 22 13:47:34 crc kubenswrapper[4769]: E0122 13:47:34.976097 4769 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" interval="1.6s"
Jan 22 13:47:35 crc kubenswrapper[4769]: I0122 13:47:35.173549 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 22 13:47:35 crc kubenswrapper[4769]: I0122 13:47:35.179326 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 22 13:47:35 crc kubenswrapper[4769]: I0122 13:47:35.180051 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 13:47:35 crc kubenswrapper[4769]: I0122 13:47:35.290832 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/98422033-e252-4416-9d6c-9a782f84a615-var-lock\") pod \"98422033-e252-4416-9d6c-9a782f84a615\" (UID: \"98422033-e252-4416-9d6c-9a782f84a615\") "
Jan 22 13:47:35 crc kubenswrapper[4769]: I0122 13:47:35.290893 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Jan 22 13:47:35 crc kubenswrapper[4769]: I0122 13:47:35.290919 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/98422033-e252-4416-9d6c-9a782f84a615-kube-api-access\") pod \"98422033-e252-4416-9d6c-9a782f84a615\" (UID: \"98422033-e252-4416-9d6c-9a782f84a615\") "
Jan 22 13:47:35 crc kubenswrapper[4769]: I0122 13:47:35.290945 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/98422033-e252-4416-9d6c-9a782f84a615-kubelet-dir\") pod \"98422033-e252-4416-9d6c-9a782f84a615\" (UID: \"98422033-e252-4416-9d6c-9a782f84a615\") "
Jan 22 13:47:35 crc kubenswrapper[4769]: I0122 13:47:35.290945 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98422033-e252-4416-9d6c-9a782f84a615-var-lock" (OuterVolumeSpecName: "var-lock") pod "98422033-e252-4416-9d6c-9a782f84a615" (UID: "98422033-e252-4416-9d6c-9a782f84a615"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 22 13:47:35 crc kubenswrapper[4769]: I0122 13:47:35.290967 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Jan 22 13:47:35 crc kubenswrapper[4769]: I0122 13:47:35.291011 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 22 13:47:35 crc kubenswrapper[4769]: I0122 13:47:35.291043 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98422033-e252-4416-9d6c-9a782f84a615-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "98422033-e252-4416-9d6c-9a782f84a615" (UID: "98422033-e252-4416-9d6c-9a782f84a615"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 22 13:47:35 crc kubenswrapper[4769]: I0122 13:47:35.291061 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 22 13:47:35 crc kubenswrapper[4769]: I0122 13:47:35.291109 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Jan 22 13:47:35 crc kubenswrapper[4769]: I0122 13:47:35.291194 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 22 13:47:35 crc kubenswrapper[4769]: I0122 13:47:35.291503 4769 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/98422033-e252-4416-9d6c-9a782f84a615-var-lock\") on node \"crc\" DevicePath \"\""
Jan 22 13:47:35 crc kubenswrapper[4769]: I0122 13:47:35.291517 4769 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\""
Jan 22 13:47:35 crc kubenswrapper[4769]: I0122 13:47:35.291526 4769 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/98422033-e252-4416-9d6c-9a782f84a615-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 22 13:47:35 crc kubenswrapper[4769]: I0122 13:47:35.291534 4769 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 22 13:47:35 crc kubenswrapper[4769]: I0122 13:47:35.291543 4769 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\""
Jan 22 13:47:35 crc kubenswrapper[4769]: I0122 13:47:35.299642 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98422033-e252-4416-9d6c-9a782f84a615-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "98422033-e252-4416-9d6c-9a782f84a615" (UID: "98422033-e252-4416-9d6c-9a782f84a615"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 13:47:35 crc kubenswrapper[4769]: E0122 13:47:35.339837 4769 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.50:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188d11aa91e7e10e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 13:47:32.916478222 +0000 UTC m=+232.327588151,LastTimestamp:2026-01-22 13:47:32.916478222 +0000 UTC m=+232.327588151,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 22 13:47:35 crc kubenswrapper[4769]: I0122 13:47:35.392736 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/98422033-e252-4416-9d6c-9a782f84a615-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 22 13:47:35 crc kubenswrapper[4769]: I0122 13:47:35.913572 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 22 13:47:35 crc kubenswrapper[4769]: I0122 13:47:35.913549 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"98422033-e252-4416-9d6c-9a782f84a615","Type":"ContainerDied","Data":"cef04179ac91b5e7825693fb666c552ce048659165cf412a395f896a85539fbc"}
Jan 22 13:47:35 crc kubenswrapper[4769]: I0122 13:47:35.913628 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cef04179ac91b5e7825693fb666c552ce048659165cf412a395f896a85539fbc"
Jan 22 13:47:35 crc kubenswrapper[4769]: I0122 13:47:35.916495 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 22 13:47:35 crc kubenswrapper[4769]: I0122 13:47:35.917888 4769 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d" exitCode=0
Jan 22 13:47:35 crc kubenswrapper[4769]: I0122 13:47:35.917952 4769 scope.go:117] "RemoveContainer" containerID="d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925"
Jan 22 13:47:35 crc kubenswrapper[4769]: I0122 13:47:35.917966 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 13:47:35 crc kubenswrapper[4769]: I0122 13:47:35.947554 4769 scope.go:117] "RemoveContainer" containerID="55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c"
Jan 22 13:47:35 crc kubenswrapper[4769]: I0122 13:47:35.963776 4769 scope.go:117] "RemoveContainer" containerID="7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda"
Jan 22 13:47:35 crc kubenswrapper[4769]: I0122 13:47:35.980987 4769 scope.go:117] "RemoveContainer" containerID="932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45"
Jan 22 13:47:36 crc kubenswrapper[4769]: I0122 13:47:36.001581 4769 scope.go:117] "RemoveContainer" containerID="3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d"
Jan 22 13:47:36 crc kubenswrapper[4769]: I0122 13:47:36.019631 4769 scope.go:117] "RemoveContainer" containerID="a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5"
Jan 22 13:47:36 crc kubenswrapper[4769]: I0122 13:47:36.041100 4769 scope.go:117] "RemoveContainer" containerID="d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925"
Jan 22 13:47:36 crc kubenswrapper[4769]: E0122 13:47:36.041895 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925\": container with ID starting with d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925 not found: ID does not exist" containerID="d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925"
Jan 22 13:47:36 crc kubenswrapper[4769]: I0122 13:47:36.041941 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925"} err="failed to get container status \"d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925\": rpc error: code = NotFound desc = could not find container \"d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925\": container with ID starting with d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925 not found: ID does not exist"
Jan 22 13:47:36 crc kubenswrapper[4769]: I0122 13:47:36.041978 4769 scope.go:117] "RemoveContainer" containerID="55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c"
Jan 22 13:47:36 crc kubenswrapper[4769]: E0122 13:47:36.042624 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c\": container with ID starting with 55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c not found: ID does not exist" containerID="55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c"
Jan 22 13:47:36 crc kubenswrapper[4769]: I0122 13:47:36.042664 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c"} err="failed to get container status \"55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c\": rpc error: code = NotFound desc = could not find container \"55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c\": container with ID starting with 55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c not found: ID does not exist"
Jan 22 13:47:36 crc kubenswrapper[4769]: I0122 13:47:36.042848 4769 scope.go:117] "RemoveContainer" containerID="7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda"
Jan 22 13:47:36 crc kubenswrapper[4769]: E0122 13:47:36.043828 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda\": container with ID starting with 7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda not found: ID does not exist" containerID="7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda"
Jan 22 13:47:36 crc kubenswrapper[4769]: I0122 13:47:36.043866 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda"} err="failed to get container status \"7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda\": rpc error: code = NotFound desc = could not find container \"7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda\": container with ID starting with 7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda not found: ID does not exist"
Jan 22 13:47:36 crc kubenswrapper[4769]: I0122 13:47:36.043892 4769 scope.go:117] "RemoveContainer" containerID="932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45"
Jan 22 13:47:36 crc kubenswrapper[4769]: E0122 13:47:36.044228 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45\": container with ID starting with 932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45 not found: ID does not exist" containerID="932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45"
Jan 22 13:47:36 crc kubenswrapper[4769]: I0122 13:47:36.044266 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45"} err="failed to get container status \"932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45\": rpc error: code = NotFound desc = could not find container \"932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45\": container with ID starting with 932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45 not found: ID does not exist"
Jan 22 13:47:36 crc kubenswrapper[4769]: I0122 13:47:36.044294 4769 scope.go:117] "RemoveContainer" containerID="3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d"
Jan 22 13:47:36 crc kubenswrapper[4769]: E0122 13:47:36.044627 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d\": container with ID starting with 3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d not found: ID does not exist" containerID="3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d"
Jan 22 13:47:36 crc kubenswrapper[4769]: I0122 13:47:36.044649 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d"} err="failed to get container status \"3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d\": rpc error: code = NotFound desc = could not find container \"3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d\": container with ID starting with 
3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d not found: ID does not exist" Jan 22 13:47:36 crc kubenswrapper[4769]: I0122 13:47:36.044663 4769 scope.go:117] "RemoveContainer" containerID="a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5" Jan 22 13:47:36 crc kubenswrapper[4769]: E0122 13:47:36.045014 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\": container with ID starting with a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5 not found: ID does not exist" containerID="a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5" Jan 22 13:47:36 crc kubenswrapper[4769]: I0122 13:47:36.045049 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5"} err="failed to get container status \"a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\": rpc error: code = NotFound desc = could not find container \"a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\": container with ID starting with a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5 not found: ID does not exist" Jan 22 13:47:36 crc kubenswrapper[4769]: E0122 13:47:36.576748 4769 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" interval="3.2s" Jan 22 13:47:36 crc kubenswrapper[4769]: I0122 13:47:36.891361 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 22 13:47:37 crc kubenswrapper[4769]: I0122 13:47:37.861513 4769 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:37 crc kubenswrapper[4769]: I0122 13:47:37.862043 4769 status_manager.go:851] "Failed to get status for pod" podUID="bc744951-0370-42be-a1c0-e639d8d8cd31" pod="openshift-marketplace/certified-operators-2ks9m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2ks9m\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:37 crc kubenswrapper[4769]: I0122 13:47:37.862438 4769 status_manager.go:851] "Failed to get status for pod" podUID="98422033-e252-4416-9d6c-9a782f84a615" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:37 crc kubenswrapper[4769]: I0122 13:47:37.862758 4769 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:37 crc 
kubenswrapper[4769]: I0122 13:47:37.863262 4769 status_manager.go:851] "Failed to get status for pod" podUID="bc744951-0370-42be-a1c0-e639d8d8cd31" pod="openshift-marketplace/certified-operators-2ks9m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2ks9m\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:37 crc kubenswrapper[4769]: I0122 13:47:37.863713 4769 status_manager.go:851] "Failed to get status for pod" podUID="143027dc-ac6a-442f-bf57-3dcd7efd0427" pod="openshift-marketplace/redhat-operators-9x475" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9x475\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:39 crc kubenswrapper[4769]: E0122 13:47:39.778311 4769 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" interval="6.4s" Jan 22 13:47:40 crc kubenswrapper[4769]: I0122 13:47:40.889785 4769 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:40 crc kubenswrapper[4769]: I0122 13:47:40.890351 4769 status_manager.go:851] "Failed to get status for pod" podUID="bc744951-0370-42be-a1c0-e639d8d8cd31" pod="openshift-marketplace/certified-operators-2ks9m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2ks9m\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:40 crc kubenswrapper[4769]: I0122 13:47:40.895727 4769 status_manager.go:851] "Failed to get status for pod" podUID="143027dc-ac6a-442f-bf57-3dcd7efd0427" pod="openshift-marketplace/redhat-operators-9x475" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9x475\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:40 crc kubenswrapper[4769]: I0122 13:47:40.896453 4769 status_manager.go:851] "Failed to get status for pod" podUID="98422033-e252-4416-9d6c-9a782f84a615" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.557735 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" podUID="e14c6636-281b-40e1-9ee8-1a08812104fd" containerName="oauth-openshift" containerID="cri-o://6c1793a53b8ea260d1542d071a7c88803a7a6d2b79a3a6f7fb53e4533578a8ea" gracePeriod=15 Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.905926 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.907158 4769 status_manager.go:851] "Failed to get status for pod" podUID="e14c6636-281b-40e1-9ee8-1a08812104fd" pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-jtzpg\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.907715 4769 status_manager.go:851] "Failed to get status for pod" podUID="143027dc-ac6a-442f-bf57-3dcd7efd0427" pod="openshift-marketplace/redhat-operators-9x475" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9x475\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.908317 4769 status_manager.go:851] "Failed to get status for pod" podUID="98422033-e252-4416-9d6c-9a782f84a615" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.908780 4769 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.909336 4769 status_manager.go:851] "Failed to get status for pod" podUID="bc744951-0370-42be-a1c0-e639d8d8cd31" pod="openshift-marketplace/certified-operators-2ks9m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2ks9m\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.954631 4769 generic.go:334] "Generic (PLEG): container finished" podID="e14c6636-281b-40e1-9ee8-1a08812104fd" containerID="6c1793a53b8ea260d1542d071a7c88803a7a6d2b79a3a6f7fb53e4533578a8ea" exitCode=0 Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.954683 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.954682 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" event={"ID":"e14c6636-281b-40e1-9ee8-1a08812104fd","Type":"ContainerDied","Data":"6c1793a53b8ea260d1542d071a7c88803a7a6d2b79a3a6f7fb53e4533578a8ea"} Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.954751 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" event={"ID":"e14c6636-281b-40e1-9ee8-1a08812104fd","Type":"ContainerDied","Data":"ecd96351628bb1d50b55482cf0c3518a0cdf7cafe69577c7b0d90695bd293ec5"} Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.954776 4769 scope.go:117] "RemoveContainer" containerID="6c1793a53b8ea260d1542d071a7c88803a7a6d2b79a3a6f7fb53e4533578a8ea" Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.955310 4769 status_manager.go:851] "Failed to get status for pod" podUID="98422033-e252-4416-9d6c-9a782f84a615" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.955864 4769 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.956213 4769 status_manager.go:851] "Failed to get status for pod" podUID="bc744951-0370-42be-a1c0-e639d8d8cd31" pod="openshift-marketplace/certified-operators-2ks9m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2ks9m\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.956568 4769 status_manager.go:851] "Failed to get status for pod" podUID="e14c6636-281b-40e1-9ee8-1a08812104fd" pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-jtzpg\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.956930 4769 status_manager.go:851] "Failed to get status for pod" podUID="143027dc-ac6a-442f-bf57-3dcd7efd0427" pod="openshift-marketplace/redhat-operators-9x475" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9x475\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.979933 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-user-template-login\") pod \"e14c6636-281b-40e1-9ee8-1a08812104fd\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.980002 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-system-cliconfig\") pod \"e14c6636-281b-40e1-9ee8-1a08812104fd\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.980048 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-user-template-error\") pod \"e14c6636-281b-40e1-9ee8-1a08812104fd\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.980232 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e14c6636-281b-40e1-9ee8-1a08812104fd-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "e14c6636-281b-40e1-9ee8-1a08812104fd" (UID: "e14c6636-281b-40e1-9ee8-1a08812104fd"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.981056 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e14c6636-281b-40e1-9ee8-1a08812104fd-audit-dir\") pod \"e14c6636-281b-40e1-9ee8-1a08812104fd\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.981153 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrbwk\" (UniqueName: \"kubernetes.io/projected/e14c6636-281b-40e1-9ee8-1a08812104fd-kube-api-access-zrbwk\") pod \"e14c6636-281b-40e1-9ee8-1a08812104fd\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.981185 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "e14c6636-281b-40e1-9ee8-1a08812104fd" (UID: "e14c6636-281b-40e1-9ee8-1a08812104fd"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.981236 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e14c6636-281b-40e1-9ee8-1a08812104fd-audit-policies\") pod \"e14c6636-281b-40e1-9ee8-1a08812104fd\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.981306 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-system-serving-cert\") pod \"e14c6636-281b-40e1-9ee8-1a08812104fd\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.981349 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-system-service-ca\") pod \"e14c6636-281b-40e1-9ee8-1a08812104fd\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.981378 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-system-session\") pod \"e14c6636-281b-40e1-9ee8-1a08812104fd\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.981419 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-system-trusted-ca-bundle\") pod \"e14c6636-281b-40e1-9ee8-1a08812104fd\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.981471 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-user-idp-0-file-data\") pod \"e14c6636-281b-40e1-9ee8-1a08812104fd\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.981514 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-system-ocp-branding-template\") pod \"e14c6636-281b-40e1-9ee8-1a08812104fd\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.981549 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-user-template-provider-selection\") pod \"e14c6636-281b-40e1-9ee8-1a08812104fd\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.981572 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-system-router-certs\") pod \"e14c6636-281b-40e1-9ee8-1a08812104fd\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " Jan 22 13:47:41 
crc kubenswrapper[4769]: I0122 13:47:41.982070 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e14c6636-281b-40e1-9ee8-1a08812104fd-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "e14c6636-281b-40e1-9ee8-1a08812104fd" (UID: "e14c6636-281b-40e1-9ee8-1a08812104fd"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.982261 4769 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.982289 4769 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e14c6636-281b-40e1-9ee8-1a08812104fd-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.982306 4769 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e14c6636-281b-40e1-9ee8-1a08812104fd-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.982314 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "e14c6636-281b-40e1-9ee8-1a08812104fd" (UID: "e14c6636-281b-40e1-9ee8-1a08812104fd"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.983048 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "e14c6636-281b-40e1-9ee8-1a08812104fd" (UID: "e14c6636-281b-40e1-9ee8-1a08812104fd"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.986945 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "e14c6636-281b-40e1-9ee8-1a08812104fd" (UID: "e14c6636-281b-40e1-9ee8-1a08812104fd"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.988011 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "e14c6636-281b-40e1-9ee8-1a08812104fd" (UID: "e14c6636-281b-40e1-9ee8-1a08812104fd"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.988330 4769 scope.go:117] "RemoveContainer" containerID="6c1793a53b8ea260d1542d071a7c88803a7a6d2b79a3a6f7fb53e4533578a8ea" Jan 22 13:47:41 crc kubenswrapper[4769]: E0122 13:47:41.988870 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c1793a53b8ea260d1542d071a7c88803a7a6d2b79a3a6f7fb53e4533578a8ea\": container with ID starting with 6c1793a53b8ea260d1542d071a7c88803a7a6d2b79a3a6f7fb53e4533578a8ea not found: ID does not exist" containerID="6c1793a53b8ea260d1542d071a7c88803a7a6d2b79a3a6f7fb53e4533578a8ea" Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.988913 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c1793a53b8ea260d1542d071a7c88803a7a6d2b79a3a6f7fb53e4533578a8ea"} err="failed to get container status \"6c1793a53b8ea260d1542d071a7c88803a7a6d2b79a3a6f7fb53e4533578a8ea\": rpc error: code = NotFound desc = could not find container \"6c1793a53b8ea260d1542d071a7c88803a7a6d2b79a3a6f7fb53e4533578a8ea\": container with ID starting with 6c1793a53b8ea260d1542d071a7c88803a7a6d2b79a3a6f7fb53e4533578a8ea not found: ID does not exist" Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.990013 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "e14c6636-281b-40e1-9ee8-1a08812104fd" (UID: "e14c6636-281b-40e1-9ee8-1a08812104fd"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.990292 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "e14c6636-281b-40e1-9ee8-1a08812104fd" (UID: "e14c6636-281b-40e1-9ee8-1a08812104fd"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.990627 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "e14c6636-281b-40e1-9ee8-1a08812104fd" (UID: "e14c6636-281b-40e1-9ee8-1a08812104fd"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.991160 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "e14c6636-281b-40e1-9ee8-1a08812104fd" (UID: "e14c6636-281b-40e1-9ee8-1a08812104fd"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.991223 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e14c6636-281b-40e1-9ee8-1a08812104fd-kube-api-access-zrbwk" (OuterVolumeSpecName: "kube-api-access-zrbwk") pod "e14c6636-281b-40e1-9ee8-1a08812104fd" (UID: "e14c6636-281b-40e1-9ee8-1a08812104fd"). InnerVolumeSpecName "kube-api-access-zrbwk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.991314 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "e14c6636-281b-40e1-9ee8-1a08812104fd" (UID: "e14c6636-281b-40e1-9ee8-1a08812104fd"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.991841 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "e14c6636-281b-40e1-9ee8-1a08812104fd" (UID: "e14c6636-281b-40e1-9ee8-1a08812104fd"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:47:42 crc kubenswrapper[4769]: I0122 13:47:42.083421 4769 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 22 13:47:42 crc kubenswrapper[4769]: I0122 13:47:42.083462 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrbwk\" (UniqueName: \"kubernetes.io/projected/e14c6636-281b-40e1-9ee8-1a08812104fd-kube-api-access-zrbwk\") on node \"crc\" DevicePath \"\"" Jan 22 13:47:42 crc kubenswrapper[4769]: I0122 13:47:42.083472 4769 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 13:47:42 crc kubenswrapper[4769]: I0122 13:47:42.083481 4769 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 22 13:47:42 crc kubenswrapper[4769]: I0122 13:47:42.083494 4769 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 22 13:47:42 crc kubenswrapper[4769]: I0122 13:47:42.083503 4769 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 13:47:42 crc kubenswrapper[4769]: I0122 13:47:42.083512 4769 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 22 13:47:42 crc kubenswrapper[4769]: I0122 13:47:42.083521 4769 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 22 13:47:42 crc kubenswrapper[4769]: I0122 13:47:42.083530 4769 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 22 13:47:42 crc kubenswrapper[4769]: I0122 13:47:42.083542 4769 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 22 13:47:42 crc kubenswrapper[4769]: I0122 13:47:42.083551 4769 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 22 13:47:42 crc kubenswrapper[4769]: I0122 13:47:42.277944 4769 status_manager.go:851] "Failed to get status for pod" podUID="143027dc-ac6a-442f-bf57-3dcd7efd0427" pod="openshift-marketplace/redhat-operators-9x475" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9x475\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:42 crc kubenswrapper[4769]: I0122 13:47:42.278641 4769 status_manager.go:851] "Failed to get status for pod" podUID="98422033-e252-4416-9d6c-9a782f84a615" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:42 crc kubenswrapper[4769]: I0122 13:47:42.279121 4769 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:42 crc kubenswrapper[4769]: I0122 13:47:42.279440 4769 status_manager.go:851] "Failed to get status for pod" podUID="bc744951-0370-42be-a1c0-e639d8d8cd31" pod="openshift-marketplace/certified-operators-2ks9m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2ks9m\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:42 crc kubenswrapper[4769]: I0122 13:47:42.279898 4769 status_manager.go:851] "Failed to get status for pod" podUID="e14c6636-281b-40e1-9ee8-1a08812104fd" pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-jtzpg\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:45 crc kubenswrapper[4769]: E0122 13:47:45.340922 4769 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.50:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188d11aa91e7e10e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 13:47:32.916478222 +0000 UTC m=+232.327588151,LastTimestamp:2026-01-22 13:47:32.916478222 +0000 UTC m=+232.327588151,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 13:47:46 crc kubenswrapper[4769]: E0122 13:47:46.180291 4769 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" interval="7s" Jan 22 13:47:46 crc kubenswrapper[4769]: I0122 13:47:46.989946 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 22 13:47:46 crc kubenswrapper[4769]: I0122 13:47:46.990012 4769 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="83046a8dcab554309ccb822f7ea451bd90e9bbe0037f2b51061cb2943122ac47" exitCode=1 Jan 22 13:47:46 crc kubenswrapper[4769]: I0122 13:47:46.990073 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"83046a8dcab554309ccb822f7ea451bd90e9bbe0037f2b51061cb2943122ac47"} Jan 22 13:47:46 crc kubenswrapper[4769]: I0122 13:47:46.990677 4769 scope.go:117] "RemoveContainer" containerID="83046a8dcab554309ccb822f7ea451bd90e9bbe0037f2b51061cb2943122ac47" Jan 22 13:47:46 crc kubenswrapper[4769]: I0122 13:47:46.991161 4769 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:46 crc kubenswrapper[4769]: I0122 13:47:46.991701 4769 status_manager.go:851] "Failed to get status for pod" podUID="98422033-e252-4416-9d6c-9a782f84a615" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:46 crc kubenswrapper[4769]: I0122 13:47:46.992247 4769 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:46 crc kubenswrapper[4769]: I0122 13:47:46.992635 4769 status_manager.go:851] "Failed to get status for pod" podUID="bc744951-0370-42be-a1c0-e639d8d8cd31" pod="openshift-marketplace/certified-operators-2ks9m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2ks9m\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:46 crc kubenswrapper[4769]: I0122 13:47:46.993167 4769 status_manager.go:851] "Failed to get status for pod" podUID="e14c6636-281b-40e1-9ee8-1a08812104fd" pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-jtzpg\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:46 crc kubenswrapper[4769]: I0122 13:47:46.993668 4769 status_manager.go:851] "Failed to get status for pod" podUID="143027dc-ac6a-442f-bf57-3dcd7efd0427" pod="openshift-marketplace/redhat-operators-9x475" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9x475\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:47 crc kubenswrapper[4769]: I0122 13:47:47.883258 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 13:47:47 crc kubenswrapper[4769]: I0122 13:47:47.885201 4769 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:47 crc kubenswrapper[4769]: I0122 13:47:47.886528 4769 status_manager.go:851] "Failed to get status for pod" podUID="98422033-e252-4416-9d6c-9a782f84a615" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:47 crc kubenswrapper[4769]: I0122 13:47:47.887166 4769 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:47 crc kubenswrapper[4769]: I0122 13:47:47.887694 4769 status_manager.go:851] "Failed to get status for pod" podUID="bc744951-0370-42be-a1c0-e639d8d8cd31" pod="openshift-marketplace/certified-operators-2ks9m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2ks9m\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:47 crc kubenswrapper[4769]: I0122 13:47:47.888266 4769 status_manager.go:851] "Failed to get status for pod" podUID="e14c6636-281b-40e1-9ee8-1a08812104fd" pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-jtzpg\": dial tcp 
38.102.83.50:6443: connect: connection refused" Jan 22 13:47:47 crc kubenswrapper[4769]: I0122 13:47:47.888917 4769 status_manager.go:851] "Failed to get status for pod" podUID="143027dc-ac6a-442f-bf57-3dcd7efd0427" pod="openshift-marketplace/redhat-operators-9x475" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9x475\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:47 crc kubenswrapper[4769]: I0122 13:47:47.901740 4769 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4d5e43a9-5dd9-470e-a3e1-65be2c0003c4" Jan 22 13:47:47 crc kubenswrapper[4769]: I0122 13:47:47.901779 4769 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4d5e43a9-5dd9-470e-a3e1-65be2c0003c4" Jan 22 13:47:47 crc kubenswrapper[4769]: E0122 13:47:47.902316 4769 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 13:47:47 crc kubenswrapper[4769]: I0122 13:47:47.903045 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 13:47:47 crc kubenswrapper[4769]: W0122 13:47:47.932934 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-d837367816c630dfa44940a9c515917ea8b41c6692dd58a65d5b65c00ec83cb9 WatchSource:0}: Error finding container d837367816c630dfa44940a9c515917ea8b41c6692dd58a65d5b65c00ec83cb9: Status 404 returned error can't find the container with id d837367816c630dfa44940a9c515917ea8b41c6692dd58a65d5b65c00ec83cb9 Jan 22 13:47:48 crc kubenswrapper[4769]: I0122 13:47:48.002983 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 22 13:47:48 crc kubenswrapper[4769]: I0122 13:47:48.003149 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2b801659beb601eac2687939f669ac486437e11bf2809863d0f3c82193d625ef"} Jan 22 13:47:48 crc kubenswrapper[4769]: I0122 13:47:48.004467 4769 status_manager.go:851] "Failed to get status for pod" podUID="98422033-e252-4416-9d6c-9a782f84a615" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:48 crc kubenswrapper[4769]: I0122 13:47:48.005129 4769 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:48 crc kubenswrapper[4769]: I0122 13:47:48.005390 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d837367816c630dfa44940a9c515917ea8b41c6692dd58a65d5b65c00ec83cb9"} Jan 22 13:47:48 crc kubenswrapper[4769]: I0122 13:47:48.005630 4769 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:48 crc kubenswrapper[4769]: I0122 13:47:48.006203 4769 status_manager.go:851] "Failed to get status for pod" podUID="bc744951-0370-42be-a1c0-e639d8d8cd31" pod="openshift-marketplace/certified-operators-2ks9m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2ks9m\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:48 crc kubenswrapper[4769]: I0122 13:47:48.006893 4769 status_manager.go:851] "Failed to get status for pod" podUID="e14c6636-281b-40e1-9ee8-1a08812104fd" pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-jtzpg\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:48 crc kubenswrapper[4769]: I0122 13:47:48.007388 4769 status_manager.go:851] "Failed to get status for pod" podUID="143027dc-ac6a-442f-bf57-3dcd7efd0427" pod="openshift-marketplace/redhat-operators-9x475" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9x475\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:49 crc kubenswrapper[4769]: I0122 13:47:49.014380 4769 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="dd074ee6f05dfb7f27b8b3cbfe33bc383b045772c3f61ed94ace304313aea8e0" exitCode=0 Jan 22 13:47:49 crc kubenswrapper[4769]: I0122 13:47:49.014608 4769 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4d5e43a9-5dd9-470e-a3e1-65be2c0003c4" Jan 22 13:47:49 crc kubenswrapper[4769]: I0122 13:47:49.014659 4769 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4d5e43a9-5dd9-470e-a3e1-65be2c0003c4" Jan 22 13:47:49 crc kubenswrapper[4769]: I0122 13:47:49.014709 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"dd074ee6f05dfb7f27b8b3cbfe33bc383b045772c3f61ed94ace304313aea8e0"} Jan 22 13:47:49 crc kubenswrapper[4769]: I0122 13:47:49.015141 4769 status_manager.go:851] "Failed to get status for pod" podUID="bc744951-0370-42be-a1c0-e639d8d8cd31" pod="openshift-marketplace/certified-operators-2ks9m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2ks9m\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:49 crc kubenswrapper[4769]: E0122 13:47:49.015309 4769 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 13:47:49 crc kubenswrapper[4769]: I0122 13:47:49.015400 
4769 status_manager.go:851] "Failed to get status for pod" podUID="e14c6636-281b-40e1-9ee8-1a08812104fd" pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-jtzpg\": dial tcp 38.102.83.50:6443: connect: connection refused"
Jan 22 13:47:49 crc kubenswrapper[4769]: I0122 13:47:49.015654 4769 status_manager.go:851] "Failed to get status for pod" podUID="143027dc-ac6a-442f-bf57-3dcd7efd0427" pod="openshift-marketplace/redhat-operators-9x475" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9x475\": dial tcp 38.102.83.50:6443: connect: connection refused"
Jan 22 13:47:49 crc kubenswrapper[4769]: I0122 13:47:49.015954 4769 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.50:6443: connect: connection refused"
Jan 22 13:47:49 crc kubenswrapper[4769]: I0122 13:47:49.016715 4769 status_manager.go:851] "Failed to get status for pod" podUID="98422033-e252-4416-9d6c-9a782f84a615" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.50:6443: connect: connection refused"
Jan 22 13:47:49 crc kubenswrapper[4769]: I0122 13:47:49.017011 4769 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.50:6443: connect: connection refused"
Jan 22 13:47:49 crc kubenswrapper[4769]: I0122 13:47:49.390012 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 22 13:47:49 crc kubenswrapper[4769]: I0122 13:47:49.390145 4769 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Jan 22 13:47:49 crc kubenswrapper[4769]: I0122 13:47:49.390199 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Jan 22 13:47:50 crc kubenswrapper[4769]: I0122 13:47:50.029960 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"06c0e2395c7cf93850d7fa2e4d5ed0de84ec761b207fe82e34e9161f79e1c68c"}
Jan 22 13:47:50 crc kubenswrapper[4769]: I0122 13:47:50.030501 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e834f92db93d1442490f9e2de8324e3492610d235f634f1be65875b5c941b47b"}
Jan 22 13:47:50 crc kubenswrapper[4769]: I0122 13:47:50.030517 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d35f3f93181017ef12da2b8dd39b76770682569c663a72985d77be2eaa6e4b28"}
Jan 22 13:47:50 crc kubenswrapper[4769]: I0122 13:47:50.030529 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"2d46abaf6523fc5bbd161058b225fd16feb206c4ef7c1baae949da9a1d15290d"}
Jan 22 13:47:50 crc kubenswrapper[4769]: I0122 13:47:50.030541 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5ef5b37f4fd5fea97b1f5419e0a1ccd7654e51ee6f955d82ccfce421fceb5aea"}
Jan 22 13:47:51 crc kubenswrapper[4769]: I0122 13:47:51.035958 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 13:47:51 crc kubenswrapper[4769]: I0122 13:47:51.035994 4769 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4d5e43a9-5dd9-470e-a3e1-65be2c0003c4"
Jan 22 13:47:51 crc kubenswrapper[4769]: I0122 13:47:51.036032 4769 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4d5e43a9-5dd9-470e-a3e1-65be2c0003c4"
Jan 22 13:47:52 crc kubenswrapper[4769]: I0122 13:47:52.904039 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 13:47:52 crc kubenswrapper[4769]: I0122 13:47:52.904393 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 13:47:52 crc kubenswrapper[4769]: I0122 13:47:52.911658 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 13:47:54 crc kubenswrapper[4769]: I0122 13:47:54.089783 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 22 13:47:56 crc kubenswrapper[4769]: I0122 13:47:56.046017 4769 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 13:47:56 crc kubenswrapper[4769]: I0122 13:47:56.064938 4769 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4d5e43a9-5dd9-470e-a3e1-65be2c0003c4"
Jan 22 13:47:56 crc kubenswrapper[4769]: I0122 13:47:56.065861 4769 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4d5e43a9-5dd9-470e-a3e1-65be2c0003c4"
Jan 22 13:47:56 crc kubenswrapper[4769]: I0122 13:47:56.070355 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 13:47:56 crc kubenswrapper[4769]: I0122 13:47:56.073162 4769 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="0e85410e-37b5-456b-9cd6-bd0b56e92a98"
Jan 22 13:47:57 crc kubenswrapper[4769]: I0122 13:47:57.070921 4769 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4d5e43a9-5dd9-470e-a3e1-65be2c0003c4"
Jan 22 13:47:57 crc kubenswrapper[4769]: I0122 13:47:57.070964 4769 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4d5e43a9-5dd9-470e-a3e1-65be2c0003c4"
Jan 22 13:47:59 crc kubenswrapper[4769]: I0122 13:47:59.396072 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 22 13:47:59 crc kubenswrapper[4769]: I0122 13:47:59.404914 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 22 13:48:00 crc kubenswrapper[4769]: I0122 13:48:00.905144 4769 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="0e85410e-37b5-456b-9cd6-bd0b56e92a98"
Jan 22 13:48:05 crc kubenswrapper[4769]: I0122 13:48:05.514558 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Jan 22 13:48:06 crc kubenswrapper[4769]: I0122 13:48:06.044106 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Jan 22 13:48:06 crc kubenswrapper[4769]: I0122 13:48:06.354913 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Jan 22 13:48:06 crc kubenswrapper[4769]: I0122 13:48:06.583962 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Jan 22 13:48:06 crc kubenswrapper[4769]: I0122 13:48:06.777709 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Jan 22 13:48:06 crc kubenswrapper[4769]: I0122 13:48:06.881804 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Jan 22 13:48:07 crc kubenswrapper[4769]: I0122 13:48:07.054759 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Jan 22 13:48:07 crc kubenswrapper[4769]: I0122 13:48:07.286557 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Jan 22 13:48:07 crc kubenswrapper[4769]: I0122 13:48:07.321695 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 22 13:48:07 crc kubenswrapper[4769]: I0122 13:48:07.371564 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 22 13:48:07 crc kubenswrapper[4769]: I0122 13:48:07.476916 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Jan 22 13:48:07 crc kubenswrapper[4769]: I0122 13:48:07.576708 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Jan 22 13:48:07 crc kubenswrapper[4769]: I0122 13:48:07.594384 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Jan 22 13:48:07 crc kubenswrapper[4769]: I0122 13:48:07.623130 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Jan 22 13:48:07 crc kubenswrapper[4769]: I0122 13:48:07.744562 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Jan 22 13:48:07 crc kubenswrapper[4769]: I0122 13:48:07.752202 4769 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 22 13:48:07 crc kubenswrapper[4769]: I0122 13:48:07.769278 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Jan 22 13:48:08 crc kubenswrapper[4769]: I0122 13:48:08.188524 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 22 13:48:08 crc kubenswrapper[4769]: I0122 13:48:08.370820 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Jan 22 13:48:08 crc kubenswrapper[4769]: I0122 13:48:08.428845 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Jan 22 13:48:08 crc kubenswrapper[4769]: I0122 13:48:08.581424 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Jan 22 13:48:08 crc kubenswrapper[4769]: I0122 13:48:08.618500 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Jan 22 13:48:08 crc kubenswrapper[4769]: I0122 13:48:08.644665 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Jan 22 13:48:08 crc kubenswrapper[4769]: I0122 13:48:08.695982 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Jan 22 13:48:08 crc kubenswrapper[4769]: I0122 13:48:08.707764 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Jan 22 13:48:09 crc kubenswrapper[4769]: I0122 13:48:09.210763 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Jan 22 13:48:09 crc kubenswrapper[4769]: I0122 13:48:09.376483 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Jan 22 13:48:09 crc kubenswrapper[4769]: I0122 13:48:09.410247 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Jan 22 13:48:09 crc kubenswrapper[4769]: I0122 13:48:09.422506 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Jan 22 13:48:09 crc kubenswrapper[4769]: I0122 13:48:09.440937 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Jan 22 13:48:09 crc kubenswrapper[4769]: I0122 13:48:09.449408 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 22 13:48:09 crc kubenswrapper[4769]: I0122 13:48:09.520729 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Jan 22 13:48:09 crc kubenswrapper[4769]: I0122 13:48:09.553048 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Jan 22 13:48:09 crc kubenswrapper[4769]: I0122 13:48:09.587290 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Jan 22 13:48:09 crc kubenswrapper[4769]: I0122 13:48:09.732303 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Jan 22 13:48:09 crc kubenswrapper[4769]: I0122 13:48:09.739288 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 22 13:48:09 crc kubenswrapper[4769]: I0122 13:48:09.745452 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Jan 22 13:48:09 crc kubenswrapper[4769]: I0122 13:48:09.783336 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 22 13:48:09 crc kubenswrapper[4769]: I0122 13:48:09.796580 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Jan 22 13:48:09 crc kubenswrapper[4769]: I0122 13:48:09.998457 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 22 13:48:10 crc kubenswrapper[4769]: I0122 13:48:10.052173 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Jan 22 13:48:10 crc kubenswrapper[4769]: I0122 13:48:10.125999 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Jan 22 13:48:10 crc kubenswrapper[4769]: I0122 13:48:10.239968 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Jan 22 13:48:10 crc kubenswrapper[4769]: I0122 13:48:10.257116 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Jan 22 13:48:10 crc kubenswrapper[4769]: I0122 13:48:10.277317 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Jan 22 13:48:10 crc kubenswrapper[4769]: I0122 13:48:10.396345 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Jan 22 13:48:10 crc kubenswrapper[4769]: I0122 13:48:10.408181 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Jan 22 13:48:10 crc kubenswrapper[4769]: I0122 13:48:10.494681 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Jan 22 13:48:10 crc kubenswrapper[4769]: I0122 13:48:10.513276 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Jan 22 13:48:10 crc kubenswrapper[4769]: I0122 13:48:10.562843 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Jan 22 13:48:10 crc kubenswrapper[4769]: I0122 13:48:10.620008 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Jan 22 13:48:10 crc kubenswrapper[4769]: I0122 13:48:10.622240 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Jan 22 13:48:10 crc kubenswrapper[4769]: I0122 13:48:10.671596 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Jan 22 13:48:10 crc kubenswrapper[4769]: I0122 13:48:10.826866 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Jan 22 13:48:10 crc kubenswrapper[4769]: I0122 13:48:10.921683 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Jan 22 13:48:10 crc kubenswrapper[4769]: I0122 13:48:10.923687 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Jan 22 13:48:10 crc kubenswrapper[4769]: I0122 13:48:10.930847 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Jan 22 13:48:10 crc kubenswrapper[4769]: I0122 13:48:10.931655 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Jan 22 13:48:10 crc kubenswrapper[4769]: I0122 13:48:10.999725 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Jan 22 13:48:11 crc kubenswrapper[4769]: I0122 13:48:11.011454 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Jan 22 13:48:11 crc kubenswrapper[4769]: I0122 13:48:11.052760 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 22 13:48:11 crc kubenswrapper[4769]: I0122 13:48:11.070665 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Jan 22 13:48:11 crc kubenswrapper[4769]: I0122 13:48:11.092784 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Jan 22 13:48:11 crc kubenswrapper[4769]: I0122 13:48:11.132247 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Jan 22 13:48:11 crc kubenswrapper[4769]: I0122 13:48:11.145036 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Jan 22 13:48:11 crc kubenswrapper[4769]: I0122 13:48:11.178285 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 22 13:48:11 crc kubenswrapper[4769]: I0122 13:48:11.253427 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Jan 22 13:48:11 crc kubenswrapper[4769]: I0122 13:48:11.276497 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Jan 22 13:48:11 crc kubenswrapper[4769]: I0122 13:48:11.329836 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 22 13:48:11 crc kubenswrapper[4769]: I0122 13:48:11.343020 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Jan 22 13:48:11 crc kubenswrapper[4769]: I0122 13:48:11.469113 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 22 13:48:11 crc kubenswrapper[4769]: I0122 13:48:11.476426 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 22 13:48:11 crc kubenswrapper[4769]: I0122 13:48:11.476750 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Jan 22 13:48:11 crc kubenswrapper[4769]: I0122 13:48:11.614236 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Jan 22 13:48:11 crc kubenswrapper[4769]: I0122 13:48:11.654596 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Jan 22 13:48:11 crc kubenswrapper[4769]: I0122 13:48:11.676554 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Jan 22 13:48:11 crc kubenswrapper[4769]: I0122 13:48:11.685913 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Jan 22 13:48:11 crc kubenswrapper[4769]: I0122 13:48:11.694581 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Jan 22 13:48:11 crc kubenswrapper[4769]: I0122 13:48:11.728965 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Jan 22 13:48:11 crc kubenswrapper[4769]: I0122 13:48:11.868277 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Jan 22 13:48:11 crc kubenswrapper[4769]: I0122 13:48:11.929843 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Jan 22 13:48:11 crc kubenswrapper[4769]: I0122 13:48:11.936097 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 22 13:48:11 crc kubenswrapper[4769]: I0122 13:48:11.948364 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 22 13:48:11 crc kubenswrapper[4769]: I0122 13:48:11.949895 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Jan 22 13:48:12 crc kubenswrapper[4769]: I0122 13:48:12.007756 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 22 13:48:12 crc kubenswrapper[4769]: I0122 13:48:12.149389 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Jan 22 13:48:12 crc kubenswrapper[4769]: I0122 13:48:12.203991 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Jan 22 13:48:12 crc kubenswrapper[4769]: I0122 13:48:12.218628 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Jan 22 13:48:12 crc kubenswrapper[4769]: I0122 13:48:12.227001 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Jan 22 13:48:12 crc kubenswrapper[4769]: I0122 13:48:12.289301 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Jan 22 13:48:12 crc kubenswrapper[4769]: I0122 13:48:12.306273 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Jan 22 13:48:12 crc kubenswrapper[4769]: I0122 13:48:12.491734 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Jan 22 13:48:12 crc kubenswrapper[4769]: I0122 13:48:12.499486 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Jan 22 13:48:12 crc kubenswrapper[4769]: I0122 13:48:12.519556 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 22 13:48:12 crc kubenswrapper[4769]: I0122 13:48:12.573662 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 22 13:48:12 crc kubenswrapper[4769]: I0122 13:48:12.657983 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Jan 22 13:48:12 crc kubenswrapper[4769]: I0122 13:48:12.698831 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 22 13:48:12 crc kubenswrapper[4769]: I0122 13:48:12.738511 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 22 13:48:12 crc kubenswrapper[4769]: I0122 13:48:12.832237 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Jan 22 13:48:12 crc kubenswrapper[4769]: I0122 13:48:12.940919 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Jan 22 13:48:13 crc kubenswrapper[4769]: I0122 13:48:13.176553 4769 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 22 13:48:13 crc kubenswrapper[4769]: I0122 13:48:13.215568 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Jan 22 13:48:13 crc kubenswrapper[4769]: I0122 13:48:13.377904 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Jan 22 13:48:13 crc kubenswrapper[4769]: I0122 13:48:13.609118 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Jan 22 13:48:13 crc kubenswrapper[4769]: I0122 13:48:13.650064 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Jan 22 13:48:13 crc kubenswrapper[4769]: I0122 13:48:13.711559 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Jan 22 13:48:13 crc kubenswrapper[4769]: I0122 13:48:13.722508 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Jan 22 13:48:13 crc kubenswrapper[4769]: I0122 13:48:13.756474 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Jan 22 13:48:13 crc kubenswrapper[4769]: I0122 13:48:13.836132 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Jan 22 13:48:13 crc kubenswrapper[4769]: I0122 13:48:13.940891 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Jan 22 13:48:14 crc kubenswrapper[4769]: I0122 13:48:14.010851 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Jan 22 13:48:14 crc kubenswrapper[4769]: I0122 13:48:14.019905 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Jan 22 13:48:14 crc kubenswrapper[4769]: I0122 13:48:14.085076 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 22 13:48:14 crc kubenswrapper[4769]: I0122 13:48:14.146634 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 22 13:48:14 crc kubenswrapper[4769]: I0122 13:48:14.192587 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Jan 22 13:48:14 crc kubenswrapper[4769]: I0122 13:48:14.225959 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Jan 22 13:48:14 crc kubenswrapper[4769]: I0122 13:48:14.227529 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Jan 22 13:48:14 crc kubenswrapper[4769]: I0122 13:48:14.466171 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Jan 22 13:48:14 crc kubenswrapper[4769]: I0122 13:48:14.677425 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Jan 22 13:48:14 crc kubenswrapper[4769]: I0122 13:48:14.682089 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 22 13:48:14 crc kubenswrapper[4769]: I0122 13:48:14.733974 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Jan 22 13:48:14 crc kubenswrapper[4769]: I0122 13:48:14.783828 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Jan 22 13:48:14 crc kubenswrapper[4769]: I0122 13:48:14.870927 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Jan 22 13:48:14 crc kubenswrapper[4769]: I0122 13:48:14.912364 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Jan 22 13:48:14 crc kubenswrapper[4769]: I0122 13:48:14.929026 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Jan 22 13:48:14 crc kubenswrapper[4769]: I0122 13:48:14.955197 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Jan 22 13:48:15 crc kubenswrapper[4769]: I0122 13:48:15.033970 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 22 13:48:15 crc kubenswrapper[4769]: I0122 13:48:15.077450 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Jan 22 13:48:15 crc kubenswrapper[4769]: I0122 13:48:15.114783 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Jan 22 13:48:15 crc kubenswrapper[4769]: I0122 13:48:15.146531 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Jan 22 13:48:15 crc kubenswrapper[4769]: I0122 13:48:15.166687 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 22 13:48:15 crc kubenswrapper[4769]: I0122 13:48:15.325941 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Jan 22 13:48:15 crc kubenswrapper[4769]: I0122 13:48:15.365818 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Jan 22 13:48:15 crc kubenswrapper[4769]: I0122 13:48:15.419921 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Jan 22 13:48:15 crc kubenswrapper[4769]: I0122 13:48:15.511450 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Jan 22 13:48:15 crc kubenswrapper[4769]: I0122 13:48:15.555989 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Jan 22 13:48:15 crc kubenswrapper[4769]: I0122 13:48:15.556868 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Jan 22 13:48:15 crc kubenswrapper[4769]: I0122 13:48:15.600454 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Jan 22 13:48:15 crc kubenswrapper[4769]: I0122 13:48:15.615568 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Jan 22 13:48:15 crc kubenswrapper[4769]: I0122 13:48:15.634828 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Jan 22 13:48:15 crc kubenswrapper[4769]: I0122 13:48:15.741409 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Jan 22 13:48:15 crc kubenswrapper[4769]: I0122 13:48:15.757156 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Jan 22 13:48:15 crc kubenswrapper[4769]: I0122 13:48:15.783372 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Jan 22 13:48:15 crc kubenswrapper[4769]: I0122 13:48:15.860422 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Jan 22 13:48:16 crc kubenswrapper[4769]: I0122 13:48:16.037124 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Jan 22 13:48:16 crc kubenswrapper[4769]: I0122 13:48:16.067752 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Jan 22 13:48:16 crc kubenswrapper[4769]: I0122 13:48:16.119777 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Jan 22 13:48:16 crc kubenswrapper[4769]: I0122 13:48:16.143289 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Jan 22 13:48:16 crc kubenswrapper[4769]: I0122 13:48:16.152948 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Jan 22 13:48:16 crc kubenswrapper[4769]: I0122 13:48:16.157386 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 22 13:48:16 crc kubenswrapper[4769]: I0122 13:48:16.185154 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Jan 22 13:48:16 crc kubenswrapper[4769]: I0122 13:48:16.269390 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 22 13:48:16 crc kubenswrapper[4769]: I0122 13:48:16.287313 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Jan 22 13:48:16 crc kubenswrapper[4769]: I0122 13:48:16.293732 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Jan 22 13:48:16 crc kubenswrapper[4769]: I0122 13:48:16.448677 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Jan 22 13:48:16 crc kubenswrapper[4769]: I0122 13:48:16.463142 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Jan 22 13:48:16 crc kubenswrapper[4769]: I0122 13:48:16.483219 4769 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 22 13:48:16 crc kubenswrapper[4769]: I0122 13:48:16.542105 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Jan 22 13:48:16 crc kubenswrapper[4769]: I0122 13:48:16.552644 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Jan 22 13:48:16 crc kubenswrapper[4769]: I0122 13:48:16.621620 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Jan 22 13:48:16 crc kubenswrapper[4769]: I0122 13:48:16.665600 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Jan 22 13:48:16 crc kubenswrapper[4769]: I0122 13:48:16.675160 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Jan 22 13:48:16 crc kubenswrapper[4769]: I0122 13:48:16.746155 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Jan 22 13:48:16 crc kubenswrapper[4769]: I0122 13:48:16.777701 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Jan 22 13:48:16 crc kubenswrapper[4769]: I0122 13:48:16.792287 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 22 13:48:16 crc kubenswrapper[4769]: I0122 13:48:16.893958 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.062527 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.183219 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.226675 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.237396 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.386089 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.392446 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.434746 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.475217 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.484772 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.507170 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.566401 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.577252 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.756837 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.777367 4769 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.777774 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=45.777761215 podStartE2EDuration="45.777761215s" podCreationTimestamp="2026-01-22 13:47:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:47:56.001269999 +0000 UTC m=+255.412379948" watchObservedRunningTime="2026-01-22 13:48:17.777761215 +0000 UTC m=+277.188871134"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.781262 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2ks9m","openshift-authentication/oauth-openshift-558db77b4-jtzpg","openshift-marketplace/redhat-operators-9x475","openshift-kube-apiserver/kube-apiserver-crc"]
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.781318 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-76766fc778-rq7bp"]
Jan 22 13:48:17 crc kubenswrapper[4769]: E0122 13:48:17.781470 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98422033-e252-4416-9d6c-9a782f84a615" containerName="installer"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.781481 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="98422033-e252-4416-9d6c-9a782f84a615" containerName="installer"
Jan 22 13:48:17 crc kubenswrapper[4769]: E0122 13:48:17.781492 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e14c6636-281b-40e1-9ee8-1a08812104fd" containerName="oauth-openshift"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.781499 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="e14c6636-281b-40e1-9ee8-1a08812104fd" containerName="oauth-openshift"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.781622 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="98422033-e252-4416-9d6c-9a782f84a615" containerName="installer"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.781637 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="e14c6636-281b-40e1-9ee8-1a08812104fd" containerName="oauth-openshift"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.781817 4769 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4d5e43a9-5dd9-470e-a3e1-65be2c0003c4"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.781841 4769 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4d5e43a9-5dd9-470e-a3e1-65be2c0003c4"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.781984 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.786053 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.786237 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.787340 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.787535 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.787583 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.787755 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.787758 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.788112 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.788367 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.788382 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.791201 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.791375 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.793871 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.799918 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.804888 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.815090 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.838371 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=21.838353222 podStartE2EDuration="21.838353222s" podCreationTimestamp="2026-01-22 13:47:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:48:17.833842718 +0000 UTC m=+277.244952657" watchObservedRunningTime="2026-01-22 13:48:17.838353222 +0000 UTC m=+277.249463151"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.912745 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.934476 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.939093 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d080b88c-ba18-4f18-b1f7-dee04d9c731b-audit-policies\") pod \"oauth-openshift-76766fc778-rq7bp\" (UID: \"d080b88c-ba18-4f18-b1f7-dee04d9c731b\") " pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.939257 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d080b88c-ba18-4f18-b1f7-dee04d9c731b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-76766fc778-rq7bp\" (UID: \"d080b88c-ba18-4f18-b1f7-dee04d9c731b\") " pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.939375 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d080b88c-ba18-4f18-b1f7-dee04d9c731b-v4-0-config-system-service-ca\") pod \"oauth-openshift-76766fc778-rq7bp\" (UID: \"d080b88c-ba18-4f18-b1f7-dee04d9c731b\") " pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.939476 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d080b88c-ba18-4f18-b1f7-dee04d9c731b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-76766fc778-rq7bp\" (UID: \"d080b88c-ba18-4f18-b1f7-dee04d9c731b\") " pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.939554 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d080b88c-ba18-4f18-b1f7-dee04d9c731b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-76766fc778-rq7bp\" (UID: \"d080b88c-ba18-4f18-b1f7-dee04d9c731b\") " pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.939629 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d080b88c-ba18-4f18-b1f7-dee04d9c731b-audit-dir\") pod \"oauth-openshift-76766fc778-rq7bp\" (UID: \"d080b88c-ba18-4f18-b1f7-dee04d9c731b\") " pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.939702 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24jsk\" (UniqueName: \"kubernetes.io/projected/d080b88c-ba18-4f18-b1f7-dee04d9c731b-kube-api-access-24jsk\") pod \"oauth-openshift-76766fc778-rq7bp\" (UID: \"d080b88c-ba18-4f18-b1f7-dee04d9c731b\") " pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.939780 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d080b88c-ba18-4f18-b1f7-dee04d9c731b-v4-0-config-user-template-login\") pod \"oauth-openshift-76766fc778-rq7bp\" (UID: \"d080b88c-ba18-4f18-b1f7-dee04d9c731b\") " pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.939939 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d080b88c-ba18-4f18-b1f7-dee04d9c731b-v4-0-config-system-router-certs\") pod \"oauth-openshift-76766fc778-rq7bp\" (UID: \"d080b88c-ba18-4f18-b1f7-dee04d9c731b\") " pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.940005 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d080b88c-ba18-4f18-b1f7-dee04d9c731b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-76766fc778-rq7bp\" (UID: \"d080b88c-ba18-4f18-b1f7-dee04d9c731b\") " pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.940027 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d080b88c-ba18-4f18-b1f7-dee04d9c731b-v4-0-config-system-session\") pod \"oauth-openshift-76766fc778-rq7bp\" (UID: \"d080b88c-ba18-4f18-b1f7-dee04d9c731b\") " pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.940062 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d080b88c-ba18-4f18-b1f7-dee04d9c731b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-76766fc778-rq7bp\" (UID: \"d080b88c-ba18-4f18-b1f7-dee04d9c731b\") " pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.940145 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d080b88c-ba18-4f18-b1f7-dee04d9c731b-v4-0-config-user-template-error\") pod \"oauth-openshift-76766fc778-rq7bp\" (UID: \"d080b88c-ba18-4f18-b1f7-dee04d9c731b\") " pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.940174 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d080b88c-ba18-4f18-b1f7-dee04d9c731b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-76766fc778-rq7bp\" (UID: \"d080b88c-ba18-4f18-b1f7-dee04d9c731b\") " pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.041184 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d080b88c-ba18-4f18-b1f7-dee04d9c731b-v4-0-config-user-template-error\") pod \"oauth-openshift-76766fc778-rq7bp\" (UID: \"d080b88c-ba18-4f18-b1f7-dee04d9c731b\") " pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.041258 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d080b88c-ba18-4f18-b1f7-dee04d9c731b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-76766fc778-rq7bp\" (UID: \"d080b88c-ba18-4f18-b1f7-dee04d9c731b\") " pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.041299 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d080b88c-ba18-4f18-b1f7-dee04d9c731b-audit-policies\") pod \"oauth-openshift-76766fc778-rq7bp\" (UID: \"d080b88c-ba18-4f18-b1f7-dee04d9c731b\") " pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.041362 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d080b88c-ba18-4f18-b1f7-dee04d9c731b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-76766fc778-rq7bp\" (UID: \"d080b88c-ba18-4f18-b1f7-dee04d9c731b\") " pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.041404 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d080b88c-ba18-4f18-b1f7-dee04d9c731b-v4-0-config-system-service-ca\") pod \"oauth-openshift-76766fc778-rq7bp\" (UID: \"d080b88c-ba18-4f18-b1f7-dee04d9c731b\") " pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.041444 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d080b88c-ba18-4f18-b1f7-dee04d9c731b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-76766fc778-rq7bp\" (UID: \"d080b88c-ba18-4f18-b1f7-dee04d9c731b\") " pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.041473 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d080b88c-ba18-4f18-b1f7-dee04d9c731b-audit-dir\") pod \"oauth-openshift-76766fc778-rq7bp\" (UID: \"d080b88c-ba18-4f18-b1f7-dee04d9c731b\") " pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.041503 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d080b88c-ba18-4f18-b1f7-dee04d9c731b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-76766fc778-rq7bp\" (UID: \"d080b88c-ba18-4f18-b1f7-dee04d9c731b\") " pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.041533 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24jsk\" (UniqueName: \"kubernetes.io/projected/d080b88c-ba18-4f18-b1f7-dee04d9c731b-kube-api-access-24jsk\") pod \"oauth-openshift-76766fc778-rq7bp\" (UID: \"d080b88c-ba18-4f18-b1f7-dee04d9c731b\") " pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.041562 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d080b88c-ba18-4f18-b1f7-dee04d9c731b-v4-0-config-user-template-login\") pod \"oauth-openshift-76766fc778-rq7bp\" (UID: \"d080b88c-ba18-4f18-b1f7-dee04d9c731b\") " pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.041605 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d080b88c-ba18-4f18-b1f7-dee04d9c731b-v4-0-config-system-router-certs\") pod \"oauth-openshift-76766fc778-rq7bp\" (UID: \"d080b88c-ba18-4f18-b1f7-dee04d9c731b\") " pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.041647 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d080b88c-ba18-4f18-b1f7-dee04d9c731b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-76766fc778-rq7bp\" (UID: \"d080b88c-ba18-4f18-b1f7-dee04d9c731b\") " pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.041682 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d080b88c-ba18-4f18-b1f7-dee04d9c731b-v4-0-config-system-session\") pod \"oauth-openshift-76766fc778-rq7bp\" (UID: \"d080b88c-ba18-4f18-b1f7-dee04d9c731b\") " pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.041681 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d080b88c-ba18-4f18-b1f7-dee04d9c731b-audit-dir\") pod \"oauth-openshift-76766fc778-rq7bp\" (UID: \"d080b88c-ba18-4f18-b1f7-dee04d9c731b\") " pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.042265 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d080b88c-ba18-4f18-b1f7-dee04d9c731b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-76766fc778-rq7bp\" (UID: \"d080b88c-ba18-4f18-b1f7-dee04d9c731b\") " pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.043380 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d080b88c-ba18-4f18-b1f7-dee04d9c731b-audit-policies\") pod \"oauth-openshift-76766fc778-rq7bp\" (UID: \"d080b88c-ba18-4f18-b1f7-dee04d9c731b\") " pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.043565 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d080b88c-ba18-4f18-b1f7-dee04d9c731b-v4-0-config-system-service-ca\") pod \"oauth-openshift-76766fc778-rq7bp\" (UID: \"d080b88c-ba18-4f18-b1f7-dee04d9c731b\") " pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.043723 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d080b88c-ba18-4f18-b1f7-dee04d9c731b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-76766fc778-rq7bp\" (UID: \"d080b88c-ba18-4f18-b1f7-dee04d9c731b\") " pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.044137 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d080b88c-ba18-4f18-b1f7-dee04d9c731b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-76766fc778-rq7bp\" (UID: \"d080b88c-ba18-4f18-b1f7-dee04d9c731b\") " pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.047107 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d080b88c-ba18-4f18-b1f7-dee04d9c731b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-76766fc778-rq7bp\" (UID: \"d080b88c-ba18-4f18-b1f7-dee04d9c731b\") " pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.047263 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d080b88c-ba18-4f18-b1f7-dee04d9c731b-v4-0-config-user-template-login\") pod \"oauth-openshift-76766fc778-rq7bp\" (UID: \"d080b88c-ba18-4f18-b1f7-dee04d9c731b\") " pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.047322 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d080b88c-ba18-4f18-b1f7-dee04d9c731b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-76766fc778-rq7bp\" (UID: \"d080b88c-ba18-4f18-b1f7-dee04d9c731b\") " pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.047470 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d080b88c-ba18-4f18-b1f7-dee04d9c731b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-76766fc778-rq7bp\" (UID: \"d080b88c-ba18-4f18-b1f7-dee04d9c731b\") " pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.047741 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d080b88c-ba18-4f18-b1f7-dee04d9c731b-v4-0-config-system-session\") pod \"oauth-openshift-76766fc778-rq7bp\" (UID: \"d080b88c-ba18-4f18-b1f7-dee04d9c731b\") " pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.047770 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d080b88c-ba18-4f18-b1f7-dee04d9c731b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-76766fc778-rq7bp\" (UID: \"d080b88c-ba18-4f18-b1f7-dee04d9c731b\") " pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.048309 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d080b88c-ba18-4f18-b1f7-dee04d9c731b-v4-0-config-user-template-error\") pod \"oauth-openshift-76766fc778-rq7bp\" (UID: \"d080b88c-ba18-4f18-b1f7-dee04d9c731b\") " pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.049409 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d080b88c-ba18-4f18-b1f7-dee04d9c731b-v4-0-config-system-router-certs\") pod \"oauth-openshift-76766fc778-rq7bp\" (UID: \"d080b88c-ba18-4f18-b1f7-dee04d9c731b\") " pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.063096 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24jsk\" (UniqueName: \"kubernetes.io/projected/d080b88c-ba18-4f18-b1f7-dee04d9c731b-kube-api-access-24jsk\") pod \"oauth-openshift-76766fc778-rq7bp\" (UID: \"d080b88c-ba18-4f18-b1f7-dee04d9c731b\") " pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.106010 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.194277 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.206004 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.236996 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.245497 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.246238 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.263723 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.275124 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.344546 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.392103 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.427399 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.466201 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.503092 4769 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.543500 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.560887 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.586848 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-76766fc778-rq7bp"]
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.588051 4769 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.588327 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://994803c634fe2140b142aa5aa7b24de248a614ae29172583b7926ab74e3de4ce" gracePeriod=5
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.737926 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.764497 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.891078 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="143027dc-ac6a-442f-bf57-3dcd7efd0427" path="/var/lib/kubelet/pods/143027dc-ac6a-442f-bf57-3dcd7efd0427/volumes"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.892190 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc744951-0370-42be-a1c0-e639d8d8cd31" path="/var/lib/kubelet/pods/bc744951-0370-42be-a1c0-e639d8d8cd31/volumes"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.893146 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e14c6636-281b-40e1-9ee8-1a08812104fd" path="/var/lib/kubelet/pods/e14c6636-281b-40e1-9ee8-1a08812104fd/volumes"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.935308 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Jan 22 13:48:19 crc kubenswrapper[4769]: I0122 13:48:19.100272 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 22 13:48:19 crc kubenswrapper[4769]: I0122 13:48:19.101978 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Jan 22 13:48:19 crc kubenswrapper[4769]: I0122 13:48:19.110674 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-76766fc778-rq7bp"]
Jan 22 13:48:19 crc kubenswrapper[4769]: I0122 13:48:19.194212 4769 kubelet.go:2453]
"SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp" event={"ID":"d080b88c-ba18-4f18-b1f7-dee04d9c731b","Type":"ContainerStarted","Data":"6ac3b47bfb0905d5ab4a329814698e0d8548b8991480a98f770ced3de9a6fea7"} Jan 22 13:48:19 crc kubenswrapper[4769]: I0122 13:48:19.215978 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 22 13:48:19 crc kubenswrapper[4769]: I0122 13:48:19.269845 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 22 13:48:19 crc kubenswrapper[4769]: I0122 13:48:19.292239 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 22 13:48:19 crc kubenswrapper[4769]: I0122 13:48:19.325833 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 22 13:48:19 crc kubenswrapper[4769]: I0122 13:48:19.342769 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 22 13:48:19 crc kubenswrapper[4769]: I0122 13:48:19.451952 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 22 13:48:19 crc kubenswrapper[4769]: I0122 13:48:19.485926 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 22 13:48:19 crc kubenswrapper[4769]: I0122 13:48:19.495954 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 22 13:48:19 crc kubenswrapper[4769]: I0122 13:48:19.531484 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 22 13:48:19 crc kubenswrapper[4769]: I0122 13:48:19.536303 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 22 13:48:19 crc kubenswrapper[4769]: I0122 13:48:19.540965 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 22 13:48:19 crc kubenswrapper[4769]: I0122 13:48:19.588750 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 22 13:48:19 crc kubenswrapper[4769]: I0122 13:48:19.589839 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 22 13:48:19 crc kubenswrapper[4769]: I0122 13:48:19.618727 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 22 13:48:19 crc kubenswrapper[4769]: I0122 13:48:19.729483 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 22 13:48:19 crc kubenswrapper[4769]: I0122 13:48:19.819984 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 22 13:48:19 crc kubenswrapper[4769]: I0122 13:48:19.857959 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 22 13:48:19 crc kubenswrapper[4769]: I0122 13:48:19.965052 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 22 
13:48:19 crc kubenswrapper[4769]: I0122 13:48:19.996990 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 22 13:48:20 crc kubenswrapper[4769]: I0122 13:48:20.137837 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 22 13:48:20 crc kubenswrapper[4769]: I0122 13:48:20.202509 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp" event={"ID":"d080b88c-ba18-4f18-b1f7-dee04d9c731b","Type":"ContainerStarted","Data":"4ac294b6ce1d87033264d3df3bfee6768956d8cfbcae1f8206e26e33cb2622b5"} Jan 22 13:48:20 crc kubenswrapper[4769]: I0122 13:48:20.202860 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp" Jan 22 13:48:20 crc kubenswrapper[4769]: I0122 13:48:20.208251 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp" Jan 22 13:48:20 crc kubenswrapper[4769]: I0122 13:48:20.211319 4769 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 22 13:48:20 crc kubenswrapper[4769]: I0122 13:48:20.241042 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp" podStartSLOduration=64.241014556 podStartE2EDuration="1m4.241014556s" podCreationTimestamp="2026-01-22 13:47:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:48:20.236027019 +0000 UTC m=+279.647136988" watchObservedRunningTime="2026-01-22 13:48:20.241014556 +0000 UTC m=+279.652124515" Jan 22 13:48:20 crc kubenswrapper[4769]: I0122 13:48:20.692657 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 22 13:48:20 crc kubenswrapper[4769]: I0122 13:48:20.714106 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 22 13:48:20 crc kubenswrapper[4769]: I0122 13:48:20.954263 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 22 13:48:20 crc kubenswrapper[4769]: I0122 13:48:20.987759 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 22 13:48:21 crc kubenswrapper[4769]: I0122 13:48:21.208308 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 22 13:48:21 crc kubenswrapper[4769]: I0122 13:48:21.401528 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 22 13:48:21 crc kubenswrapper[4769]: I0122 13:48:21.484053 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 22 13:48:21 crc kubenswrapper[4769]: I0122 13:48:21.636519 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 22 13:48:24 crc kubenswrapper[4769]: I0122 13:48:24.175366 4769 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 22 13:48:24 crc kubenswrapper[4769]: I0122 13:48:24.175838 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 13:48:24 crc kubenswrapper[4769]: I0122 13:48:24.233616 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 22 13:48:24 crc kubenswrapper[4769]: I0122 13:48:24.233678 4769 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="994803c634fe2140b142aa5aa7b24de248a614ae29172583b7926ab74e3de4ce" exitCode=137 Jan 22 13:48:24 crc kubenswrapper[4769]: I0122 13:48:24.233731 4769 scope.go:117] "RemoveContainer" containerID="994803c634fe2140b142aa5aa7b24de248a614ae29172583b7926ab74e3de4ce" Jan 22 13:48:24 crc kubenswrapper[4769]: I0122 13:48:24.233785 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 13:48:24 crc kubenswrapper[4769]: I0122 13:48:24.236468 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 22 13:48:24 crc kubenswrapper[4769]: I0122 13:48:24.236508 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 22 13:48:24 crc kubenswrapper[4769]: I0122 13:48:24.236535 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 22 13:48:24 crc kubenswrapper[4769]: I0122 13:48:24.236583 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 13:48:24 crc kubenswrapper[4769]: I0122 13:48:24.236597 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 22 13:48:24 crc kubenswrapper[4769]: I0122 13:48:24.236615 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 13:48:24 crc kubenswrapper[4769]: I0122 13:48:24.236632 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 13:48:24 crc kubenswrapper[4769]: I0122 13:48:24.236624 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 22 13:48:24 crc kubenswrapper[4769]: I0122 13:48:24.236655 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 13:48:24 crc kubenswrapper[4769]: I0122 13:48:24.237071 4769 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 22 13:48:24 crc kubenswrapper[4769]: I0122 13:48:24.237085 4769 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 22 13:48:24 crc kubenswrapper[4769]: I0122 13:48:24.237094 4769 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 22 13:48:24 crc kubenswrapper[4769]: I0122 13:48:24.237101 4769 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 22 13:48:24 crc kubenswrapper[4769]: I0122 13:48:24.244440 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 13:48:24 crc kubenswrapper[4769]: I0122 13:48:24.254418 4769 scope.go:117] "RemoveContainer" containerID="994803c634fe2140b142aa5aa7b24de248a614ae29172583b7926ab74e3de4ce" Jan 22 13:48:24 crc kubenswrapper[4769]: E0122 13:48:24.254951 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"994803c634fe2140b142aa5aa7b24de248a614ae29172583b7926ab74e3de4ce\": container with ID starting with 994803c634fe2140b142aa5aa7b24de248a614ae29172583b7926ab74e3de4ce not found: ID does not exist" containerID="994803c634fe2140b142aa5aa7b24de248a614ae29172583b7926ab74e3de4ce" Jan 22 13:48:24 crc kubenswrapper[4769]: I0122 13:48:24.254989 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"994803c634fe2140b142aa5aa7b24de248a614ae29172583b7926ab74e3de4ce"} err="failed to get container status \"994803c634fe2140b142aa5aa7b24de248a614ae29172583b7926ab74e3de4ce\": rpc error: code = NotFound desc = could not find container \"994803c634fe2140b142aa5aa7b24de248a614ae29172583b7926ab74e3de4ce\": container with ID starting with 994803c634fe2140b142aa5aa7b24de248a614ae29172583b7926ab74e3de4ce not found: ID does not exist" Jan 22 13:48:24 crc kubenswrapper[4769]: I0122 13:48:24.338450 4769 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 22 13:48:24 crc kubenswrapper[4769]: I0122 13:48:24.891211 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 22 13:48:24 crc kubenswrapper[4769]: I0122 13:48:24.891592 4769 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Jan 22 13:48:24 crc kubenswrapper[4769]: I0122 13:48:24.904089 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 22 13:48:24 crc kubenswrapper[4769]: I0122 13:48:24.904164 4769 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="611b7afc-b813-48f7-80c8-7cec2c2a5711" Jan 22 13:48:24 crc kubenswrapper[4769]: I0122 13:48:24.908580 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 22 13:48:24 crc kubenswrapper[4769]: I0122 13:48:24.908625 4769 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="611b7afc-b813-48f7-80c8-7cec2c2a5711" Jan 22 13:48:35 crc kubenswrapper[4769]: I0122 13:48:35.302005 4769 generic.go:334] "Generic (PLEG): container finished" podID="dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae" containerID="63ce7caf2f29fa4c750335f093e515944a1c8003ddf040ccfa68087863d13e90" exitCode=0 Jan 22 13:48:35 crc kubenswrapper[4769]: I0122 13:48:35.302126 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5jwbt" event={"ID":"dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae","Type":"ContainerDied","Data":"63ce7caf2f29fa4c750335f093e515944a1c8003ddf040ccfa68087863d13e90"} Jan 22 13:48:35 crc kubenswrapper[4769]: I0122 13:48:35.303169 4769 scope.go:117] 
"RemoveContainer" containerID="63ce7caf2f29fa4c750335f093e515944a1c8003ddf040ccfa68087863d13e90" Jan 22 13:48:36 crc kubenswrapper[4769]: I0122 13:48:36.310123 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5jwbt" event={"ID":"dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae","Type":"ContainerStarted","Data":"e34ed27b31ae8964c9182b8aa629d506dd39a530839a18c60e8a9d7b09eba8d4"} Jan 22 13:48:36 crc kubenswrapper[4769]: I0122 13:48:36.311840 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-5jwbt" Jan 22 13:48:36 crc kubenswrapper[4769]: I0122 13:48:36.313220 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-5jwbt" Jan 22 13:48:38 crc kubenswrapper[4769]: I0122 13:48:38.493401 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 22 13:48:40 crc kubenswrapper[4769]: I0122 13:48:40.742843 4769 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 22 13:48:41 crc kubenswrapper[4769]: I0122 13:48:41.218737 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 22 13:48:41 crc kubenswrapper[4769]: I0122 13:48:41.423861 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-k5psf"] Jan 22 13:48:41 crc kubenswrapper[4769]: I0122 13:48:41.424123 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-k5psf" podUID="2b0fa7ff-24c4-431c-bc35-87f9483d5c70" containerName="controller-manager" containerID="cri-o://ee7c2bbb114ddbe83948948a75500f8669adfebad9df9dbd0ee86c53a656337b" gracePeriod=30 Jan 22 13:48:41 crc kubenswrapper[4769]: I0122 13:48:41.523763 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qp45"] Jan 22 13:48:41 crc kubenswrapper[4769]: I0122 13:48:41.524056 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qp45" podUID="88755d81-da75-40b3-97c4-224eaad0eca2" containerName="route-controller-manager" containerID="cri-o://2f10c10086311c3110b8a32a37138f280d5ba030f8b232e9aab33f5fe28c6210" gracePeriod=30 Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.354674 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-k5psf" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.356324 4769 generic.go:334] "Generic (PLEG): container finished" podID="88755d81-da75-40b3-97c4-224eaad0eca2" containerID="2f10c10086311c3110b8a32a37138f280d5ba030f8b232e9aab33f5fe28c6210" exitCode=0 Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.356515 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qp45" event={"ID":"88755d81-da75-40b3-97c4-224eaad0eca2","Type":"ContainerDied","Data":"2f10c10086311c3110b8a32a37138f280d5ba030f8b232e9aab33f5fe28c6210"} Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.356832 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qp45" event={"ID":"88755d81-da75-40b3-97c4-224eaad0eca2","Type":"ContainerDied","Data":"8a4ca8e6f7f24168e7b28e169244f2171fb54980af290f9158d1ed973b3b78f4"} Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.356889 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a4ca8e6f7f24168e7b28e169244f2171fb54980af290f9158d1ed973b3b78f4" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.363990 4769 generic.go:334] "Generic (PLEG): container finished" podID="2b0fa7ff-24c4-431c-bc35-87f9483d5c70" containerID="ee7c2bbb114ddbe83948948a75500f8669adfebad9df9dbd0ee86c53a656337b" exitCode=0 Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.364024 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-k5psf" event={"ID":"2b0fa7ff-24c4-431c-bc35-87f9483d5c70","Type":"ContainerDied","Data":"ee7c2bbb114ddbe83948948a75500f8669adfebad9df9dbd0ee86c53a656337b"} Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.364052 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-k5psf" event={"ID":"2b0fa7ff-24c4-431c-bc35-87f9483d5c70","Type":"ContainerDied","Data":"99824953bd8e0a8c9f25b06e40921ab235122e7afd37d061ee57a611b654dd94"} Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.364074 4769 scope.go:117] "RemoveContainer" containerID="ee7c2bbb114ddbe83948948a75500f8669adfebad9df9dbd0ee86c53a656337b" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.364073 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-k5psf" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.365598 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qp45" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.384432 4769 scope.go:117] "RemoveContainer" containerID="ee7c2bbb114ddbe83948948a75500f8669adfebad9df9dbd0ee86c53a656337b" Jan 22 13:48:42 crc kubenswrapper[4769]: E0122 13:48:42.384831 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee7c2bbb114ddbe83948948a75500f8669adfebad9df9dbd0ee86c53a656337b\": container with ID starting with ee7c2bbb114ddbe83948948a75500f8669adfebad9df9dbd0ee86c53a656337b not found: ID does not exist" containerID="ee7c2bbb114ddbe83948948a75500f8669adfebad9df9dbd0ee86c53a656337b" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.384870 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee7c2bbb114ddbe83948948a75500f8669adfebad9df9dbd0ee86c53a656337b"} err="failed to get container status \"ee7c2bbb114ddbe83948948a75500f8669adfebad9df9dbd0ee86c53a656337b\": rpc error: code = NotFound desc = could not find container \"ee7c2bbb114ddbe83948948a75500f8669adfebad9df9dbd0ee86c53a656337b\": container with ID starting with ee7c2bbb114ddbe83948948a75500f8669adfebad9df9dbd0ee86c53a656337b not found: ID does not exist" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.450307 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2b0fa7ff-24c4-431c-bc35-87f9483d5c70-proxy-ca-bundles\") pod \"2b0fa7ff-24c4-431c-bc35-87f9483d5c70\" (UID: \"2b0fa7ff-24c4-431c-bc35-87f9483d5c70\") " Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.450644 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2b0fa7ff-24c4-431c-bc35-87f9483d5c70-client-ca\") pod \"2b0fa7ff-24c4-431c-bc35-87f9483d5c70\" (UID: \"2b0fa7ff-24c4-431c-bc35-87f9483d5c70\") " Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.451035 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b0fa7ff-24c4-431c-bc35-87f9483d5c70-client-ca" (OuterVolumeSpecName: "client-ca") pod "2b0fa7ff-24c4-431c-bc35-87f9483d5c70" (UID: "2b0fa7ff-24c4-431c-bc35-87f9483d5c70"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.451075 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b0fa7ff-24c4-431c-bc35-87f9483d5c70-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "2b0fa7ff-24c4-431c-bc35-87f9483d5c70" (UID: "2b0fa7ff-24c4-431c-bc35-87f9483d5c70"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.451115 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b0fa7ff-24c4-431c-bc35-87f9483d5c70-serving-cert\") pod \"2b0fa7ff-24c4-431c-bc35-87f9483d5c70\" (UID: \"2b0fa7ff-24c4-431c-bc35-87f9483d5c70\") " Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.451837 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b0fa7ff-24c4-431c-bc35-87f9483d5c70-config\") pod \"2b0fa7ff-24c4-431c-bc35-87f9483d5c70\" (UID: \"2b0fa7ff-24c4-431c-bc35-87f9483d5c70\") " Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.451891 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sln4g\" (UniqueName: \"kubernetes.io/projected/2b0fa7ff-24c4-431c-bc35-87f9483d5c70-kube-api-access-sln4g\") pod \"2b0fa7ff-24c4-431c-bc35-87f9483d5c70\" (UID: \"2b0fa7ff-24c4-431c-bc35-87f9483d5c70\") " Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.453388 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b0fa7ff-24c4-431c-bc35-87f9483d5c70-config" (OuterVolumeSpecName: "config") pod "2b0fa7ff-24c4-431c-bc35-87f9483d5c70" (UID: "2b0fa7ff-24c4-431c-bc35-87f9483d5c70"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.456537 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b0fa7ff-24c4-431c-bc35-87f9483d5c70-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2b0fa7ff-24c4-431c-bc35-87f9483d5c70" (UID: "2b0fa7ff-24c4-431c-bc35-87f9483d5c70"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.459651 4769 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2b0fa7ff-24c4-431c-bc35-87f9483d5c70-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.459703 4769 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2b0fa7ff-24c4-431c-bc35-87f9483d5c70-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.459717 4769 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b0fa7ff-24c4-431c-bc35-87f9483d5c70-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.459728 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b0fa7ff-24c4-431c-bc35-87f9483d5c70-config\") on node \"crc\" DevicePath \"\"" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.475107 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b0fa7ff-24c4-431c-bc35-87f9483d5c70-kube-api-access-sln4g" (OuterVolumeSpecName: "kube-api-access-sln4g") pod "2b0fa7ff-24c4-431c-bc35-87f9483d5c70" (UID: "2b0fa7ff-24c4-431c-bc35-87f9483d5c70"). InnerVolumeSpecName "kube-api-access-sln4g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.560197 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/88755d81-da75-40b3-97c4-224eaad0eca2-client-ca\") pod \"88755d81-da75-40b3-97c4-224eaad0eca2\" (UID: \"88755d81-da75-40b3-97c4-224eaad0eca2\") " Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.560279 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88755d81-da75-40b3-97c4-224eaad0eca2-serving-cert\") pod \"88755d81-da75-40b3-97c4-224eaad0eca2\" (UID: \"88755d81-da75-40b3-97c4-224eaad0eca2\") " Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.560301 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxfjc\" (UniqueName: \"kubernetes.io/projected/88755d81-da75-40b3-97c4-224eaad0eca2-kube-api-access-qxfjc\") pod \"88755d81-da75-40b3-97c4-224eaad0eca2\" (UID: \"88755d81-da75-40b3-97c4-224eaad0eca2\") " Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.560335 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88755d81-da75-40b3-97c4-224eaad0eca2-config\") pod \"88755d81-da75-40b3-97c4-224eaad0eca2\" (UID: \"88755d81-da75-40b3-97c4-224eaad0eca2\") " Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.560532 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sln4g\" (UniqueName: \"kubernetes.io/projected/2b0fa7ff-24c4-431c-bc35-87f9483d5c70-kube-api-access-sln4g\") on node \"crc\" DevicePath \"\"" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.561285 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88755d81-da75-40b3-97c4-224eaad0eca2-client-ca" (OuterVolumeSpecName: "client-ca") pod "88755d81-da75-40b3-97c4-224eaad0eca2" (UID: "88755d81-da75-40b3-97c4-224eaad0eca2"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.561353 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88755d81-da75-40b3-97c4-224eaad0eca2-config" (OuterVolumeSpecName: "config") pod "88755d81-da75-40b3-97c4-224eaad0eca2" (UID: "88755d81-da75-40b3-97c4-224eaad0eca2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.568812 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88755d81-da75-40b3-97c4-224eaad0eca2-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "88755d81-da75-40b3-97c4-224eaad0eca2" (UID: "88755d81-da75-40b3-97c4-224eaad0eca2"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.569981 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88755d81-da75-40b3-97c4-224eaad0eca2-kube-api-access-qxfjc" (OuterVolumeSpecName: "kube-api-access-qxfjc") pod "88755d81-da75-40b3-97c4-224eaad0eca2" (UID: "88755d81-da75-40b3-97c4-224eaad0eca2"). InnerVolumeSpecName "kube-api-access-qxfjc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.657965 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7d9c9df784-dt6l9"] Jan 22 13:48:42 crc kubenswrapper[4769]: E0122 13:48:42.658215 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88755d81-da75-40b3-97c4-224eaad0eca2" containerName="route-controller-manager" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.658237 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="88755d81-da75-40b3-97c4-224eaad0eca2" containerName="route-controller-manager" Jan 22 13:48:42 crc kubenswrapper[4769]: E0122 13:48:42.658250 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.658257 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 22 13:48:42 crc kubenswrapper[4769]: E0122 13:48:42.658273 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b0fa7ff-24c4-431c-bc35-87f9483d5c70" containerName="controller-manager" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.658282 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b0fa7ff-24c4-431c-bc35-87f9483d5c70" containerName="controller-manager" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.658406 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="88755d81-da75-40b3-97c4-224eaad0eca2" containerName="route-controller-manager" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.658425 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b0fa7ff-24c4-431c-bc35-87f9483d5c70" containerName="controller-manager" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.658438 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.658911 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7d9c9df784-dt6l9" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.661548 4769 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/88755d81-da75-40b3-97c4-224eaad0eca2-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.661609 4769 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88755d81-da75-40b3-97c4-224eaad0eca2-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.661624 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxfjc\" (UniqueName: \"kubernetes.io/projected/88755d81-da75-40b3-97c4-224eaad0eca2-kube-api-access-qxfjc\") on node \"crc\" DevicePath \"\"" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.661637 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88755d81-da75-40b3-97c4-224eaad0eca2-config\") on node \"crc\" DevicePath \"\"" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.666648 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b57bf8468-2j2r6"] Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.667442 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b57bf8468-2j2r6" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.675498 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7d9c9df784-dt6l9"] Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.682168 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b57bf8468-2j2r6"] Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.711911 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-k5psf"] Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.715688 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-k5psf"] Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.748768 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7d9c9df784-dt6l9"] Jan 22 13:48:42 crc kubenswrapper[4769]: E0122 13:48:42.749207 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config kube-api-access-xjtqx proxy-ca-bundles serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-7d9c9df784-dt6l9" podUID="74671cae-8e7e-40b3-8137-2b54a4032b26" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.755320 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b57bf8468-2j2r6"] Jan 22 13:48:42 crc kubenswrapper[4769]: E0122 13:48:42.755766 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config kube-api-access-9mgb2 serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-route-controller-manager/route-controller-manager-7b57bf8468-2j2r6" podUID="81a6e8a2-199d-482d-98bc-0f2f16383d4e" Jan 22 13:48:42 crc 
kubenswrapper[4769]: I0122 13:48:42.763194 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/74671cae-8e7e-40b3-8137-2b54a4032b26-serving-cert\") pod \"controller-manager-7d9c9df784-dt6l9\" (UID: \"74671cae-8e7e-40b3-8137-2b54a4032b26\") " pod="openshift-controller-manager/controller-manager-7d9c9df784-dt6l9" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.763248 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjtqx\" (UniqueName: \"kubernetes.io/projected/74671cae-8e7e-40b3-8137-2b54a4032b26-kube-api-access-xjtqx\") pod \"controller-manager-7d9c9df784-dt6l9\" (UID: \"74671cae-8e7e-40b3-8137-2b54a4032b26\") " pod="openshift-controller-manager/controller-manager-7d9c9df784-dt6l9" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.763286 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81a6e8a2-199d-482d-98bc-0f2f16383d4e-config\") pod \"route-controller-manager-7b57bf8468-2j2r6\" (UID: \"81a6e8a2-199d-482d-98bc-0f2f16383d4e\") " pod="openshift-route-controller-manager/route-controller-manager-7b57bf8468-2j2r6" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.763304 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mgb2\" (UniqueName: \"kubernetes.io/projected/81a6e8a2-199d-482d-98bc-0f2f16383d4e-kube-api-access-9mgb2\") pod \"route-controller-manager-7b57bf8468-2j2r6\" (UID: \"81a6e8a2-199d-482d-98bc-0f2f16383d4e\") " pod="openshift-route-controller-manager/route-controller-manager-7b57bf8468-2j2r6" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.763327 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/74671cae-8e7e-40b3-8137-2b54a4032b26-proxy-ca-bundles\") pod \"controller-manager-7d9c9df784-dt6l9\" (UID: \"74671cae-8e7e-40b3-8137-2b54a4032b26\") " pod="openshift-controller-manager/controller-manager-7d9c9df784-dt6l9" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.763349 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81a6e8a2-199d-482d-98bc-0f2f16383d4e-serving-cert\") pod \"route-controller-manager-7b57bf8468-2j2r6\" (UID: \"81a6e8a2-199d-482d-98bc-0f2f16383d4e\") " pod="openshift-route-controller-manager/route-controller-manager-7b57bf8468-2j2r6" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.763371 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/74671cae-8e7e-40b3-8137-2b54a4032b26-client-ca\") pod \"controller-manager-7d9c9df784-dt6l9\" (UID: \"74671cae-8e7e-40b3-8137-2b54a4032b26\") " pod="openshift-controller-manager/controller-manager-7d9c9df784-dt6l9" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.763451 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74671cae-8e7e-40b3-8137-2b54a4032b26-config\") pod \"controller-manager-7d9c9df784-dt6l9\" (UID: \"74671cae-8e7e-40b3-8137-2b54a4032b26\") " pod="openshift-controller-manager/controller-manager-7d9c9df784-dt6l9" Jan 22 13:48:42 
crc kubenswrapper[4769]: I0122 13:48:42.763467 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/81a6e8a2-199d-482d-98bc-0f2f16383d4e-client-ca\") pod \"route-controller-manager-7b57bf8468-2j2r6\" (UID: \"81a6e8a2-199d-482d-98bc-0f2f16383d4e\") " pod="openshift-route-controller-manager/route-controller-manager-7b57bf8468-2j2r6" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.864775 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74671cae-8e7e-40b3-8137-2b54a4032b26-config\") pod \"controller-manager-7d9c9df784-dt6l9\" (UID: \"74671cae-8e7e-40b3-8137-2b54a4032b26\") " pod="openshift-controller-manager/controller-manager-7d9c9df784-dt6l9" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.864828 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/81a6e8a2-199d-482d-98bc-0f2f16383d4e-client-ca\") pod \"route-controller-manager-7b57bf8468-2j2r6\" (UID: \"81a6e8a2-199d-482d-98bc-0f2f16383d4e\") " pod="openshift-route-controller-manager/route-controller-manager-7b57bf8468-2j2r6" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.864859 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/74671cae-8e7e-40b3-8137-2b54a4032b26-serving-cert\") pod \"controller-manager-7d9c9df784-dt6l9\" (UID: \"74671cae-8e7e-40b3-8137-2b54a4032b26\") " pod="openshift-controller-manager/controller-manager-7d9c9df784-dt6l9" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.864888 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjtqx\" (UniqueName: \"kubernetes.io/projected/74671cae-8e7e-40b3-8137-2b54a4032b26-kube-api-access-xjtqx\") pod \"controller-manager-7d9c9df784-dt6l9\" (UID: \"74671cae-8e7e-40b3-8137-2b54a4032b26\") " pod="openshift-controller-manager/controller-manager-7d9c9df784-dt6l9" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.864915 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81a6e8a2-199d-482d-98bc-0f2f16383d4e-config\") pod \"route-controller-manager-7b57bf8468-2j2r6\" (UID: \"81a6e8a2-199d-482d-98bc-0f2f16383d4e\") " pod="openshift-route-controller-manager/route-controller-manager-7b57bf8468-2j2r6" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.864931 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mgb2\" (UniqueName: \"kubernetes.io/projected/81a6e8a2-199d-482d-98bc-0f2f16383d4e-kube-api-access-9mgb2\") pod \"route-controller-manager-7b57bf8468-2j2r6\" (UID: \"81a6e8a2-199d-482d-98bc-0f2f16383d4e\") " pod="openshift-route-controller-manager/route-controller-manager-7b57bf8468-2j2r6" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.864947 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/74671cae-8e7e-40b3-8137-2b54a4032b26-proxy-ca-bundles\") pod \"controller-manager-7d9c9df784-dt6l9\" (UID: \"74671cae-8e7e-40b3-8137-2b54a4032b26\") " pod="openshift-controller-manager/controller-manager-7d9c9df784-dt6l9" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.864966 4769 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81a6e8a2-199d-482d-98bc-0f2f16383d4e-serving-cert\") pod \"route-controller-manager-7b57bf8468-2j2r6\" (UID: \"81a6e8a2-199d-482d-98bc-0f2f16383d4e\") " pod="openshift-route-controller-manager/route-controller-manager-7b57bf8468-2j2r6"
Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.864987 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/74671cae-8e7e-40b3-8137-2b54a4032b26-client-ca\") pod \"controller-manager-7d9c9df784-dt6l9\" (UID: \"74671cae-8e7e-40b3-8137-2b54a4032b26\") " pod="openshift-controller-manager/controller-manager-7d9c9df784-dt6l9"
Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.865740 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/74671cae-8e7e-40b3-8137-2b54a4032b26-client-ca\") pod \"controller-manager-7d9c9df784-dt6l9\" (UID: \"74671cae-8e7e-40b3-8137-2b54a4032b26\") " pod="openshift-controller-manager/controller-manager-7d9c9df784-dt6l9"
Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.866050 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74671cae-8e7e-40b3-8137-2b54a4032b26-config\") pod \"controller-manager-7d9c9df784-dt6l9\" (UID: \"74671cae-8e7e-40b3-8137-2b54a4032b26\") " pod="openshift-controller-manager/controller-manager-7d9c9df784-dt6l9"
Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.866205 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/74671cae-8e7e-40b3-8137-2b54a4032b26-proxy-ca-bundles\") pod \"controller-manager-7d9c9df784-dt6l9\" (UID: \"74671cae-8e7e-40b3-8137-2b54a4032b26\") " pod="openshift-controller-manager/controller-manager-7d9c9df784-dt6l9"
Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.867016 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81a6e8a2-199d-482d-98bc-0f2f16383d4e-config\") pod \"route-controller-manager-7b57bf8468-2j2r6\" (UID: \"81a6e8a2-199d-482d-98bc-0f2f16383d4e\") " pod="openshift-route-controller-manager/route-controller-manager-7b57bf8468-2j2r6"
Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.867516 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/81a6e8a2-199d-482d-98bc-0f2f16383d4e-client-ca\") pod \"route-controller-manager-7b57bf8468-2j2r6\" (UID: \"81a6e8a2-199d-482d-98bc-0f2f16383d4e\") " pod="openshift-route-controller-manager/route-controller-manager-7b57bf8468-2j2r6"
Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.872595 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/74671cae-8e7e-40b3-8137-2b54a4032b26-serving-cert\") pod \"controller-manager-7d9c9df784-dt6l9\" (UID: \"74671cae-8e7e-40b3-8137-2b54a4032b26\") " pod="openshift-controller-manager/controller-manager-7d9c9df784-dt6l9"
Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.876344 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81a6e8a2-199d-482d-98bc-0f2f16383d4e-serving-cert\") pod \"route-controller-manager-7b57bf8468-2j2r6\" (UID: \"81a6e8a2-199d-482d-98bc-0f2f16383d4e\") " pod="openshift-route-controller-manager/route-controller-manager-7b57bf8468-2j2r6"
Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.879873 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mgb2\" (UniqueName: \"kubernetes.io/projected/81a6e8a2-199d-482d-98bc-0f2f16383d4e-kube-api-access-9mgb2\") pod \"route-controller-manager-7b57bf8468-2j2r6\" (UID: \"81a6e8a2-199d-482d-98bc-0f2f16383d4e\") " pod="openshift-route-controller-manager/route-controller-manager-7b57bf8468-2j2r6"
Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.884164 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjtqx\" (UniqueName: \"kubernetes.io/projected/74671cae-8e7e-40b3-8137-2b54a4032b26-kube-api-access-xjtqx\") pod \"controller-manager-7d9c9df784-dt6l9\" (UID: \"74671cae-8e7e-40b3-8137-2b54a4032b26\") " pod="openshift-controller-manager/controller-manager-7d9c9df784-dt6l9"
Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.888987 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b0fa7ff-24c4-431c-bc35-87f9483d5c70" path="/var/lib/kubelet/pods/2b0fa7ff-24c4-431c-bc35-87f9483d5c70/volumes"
Jan 22 13:48:43 crc kubenswrapper[4769]: I0122 13:48:43.371633 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qp45"
Jan 22 13:48:43 crc kubenswrapper[4769]: I0122 13:48:43.371672 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7d9c9df784-dt6l9"
Jan 22 13:48:43 crc kubenswrapper[4769]: I0122 13:48:43.371854 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b57bf8468-2j2r6"
Jan 22 13:48:43 crc kubenswrapper[4769]: I0122 13:48:43.381633 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7d9c9df784-dt6l9"
Jan 22 13:48:43 crc kubenswrapper[4769]: I0122 13:48:43.388359 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b57bf8468-2j2r6"
Jan 22 13:48:43 crc kubenswrapper[4769]: I0122 13:48:43.400096 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qp45"]
Jan 22 13:48:43 crc kubenswrapper[4769]: I0122 13:48:43.404409 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qp45"]
Jan 22 13:48:43 crc kubenswrapper[4769]: I0122 13:48:43.573359 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjtqx\" (UniqueName: \"kubernetes.io/projected/74671cae-8e7e-40b3-8137-2b54a4032b26-kube-api-access-xjtqx\") pod \"74671cae-8e7e-40b3-8137-2b54a4032b26\" (UID: \"74671cae-8e7e-40b3-8137-2b54a4032b26\") "
Jan 22 13:48:43 crc kubenswrapper[4769]: I0122 13:48:43.573429 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mgb2\" (UniqueName: \"kubernetes.io/projected/81a6e8a2-199d-482d-98bc-0f2f16383d4e-kube-api-access-9mgb2\") pod \"81a6e8a2-199d-482d-98bc-0f2f16383d4e\" (UID: \"81a6e8a2-199d-482d-98bc-0f2f16383d4e\") "
Jan 22 13:48:43 crc kubenswrapper[4769]: I0122 13:48:43.573503 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/81a6e8a2-199d-482d-98bc-0f2f16383d4e-client-ca\") pod \"81a6e8a2-199d-482d-98bc-0f2f16383d4e\" (UID: \"81a6e8a2-199d-482d-98bc-0f2f16383d4e\") "
Jan 22 13:48:43 crc kubenswrapper[4769]: I0122 13:48:43.573562 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/74671cae-8e7e-40b3-8137-2b54a4032b26-client-ca\") pod \"74671cae-8e7e-40b3-8137-2b54a4032b26\" (UID: \"74671cae-8e7e-40b3-8137-2b54a4032b26\") "
Jan 22 13:48:43 crc kubenswrapper[4769]: I0122 13:48:43.573590 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81a6e8a2-199d-482d-98bc-0f2f16383d4e-serving-cert\") pod \"81a6e8a2-199d-482d-98bc-0f2f16383d4e\" (UID: \"81a6e8a2-199d-482d-98bc-0f2f16383d4e\") "
Jan 22 13:48:43 crc kubenswrapper[4769]: I0122 13:48:43.573620 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81a6e8a2-199d-482d-98bc-0f2f16383d4e-config\") pod \"81a6e8a2-199d-482d-98bc-0f2f16383d4e\" (UID: \"81a6e8a2-199d-482d-98bc-0f2f16383d4e\") "
Jan 22 13:48:43 crc kubenswrapper[4769]: I0122 13:48:43.573649 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/74671cae-8e7e-40b3-8137-2b54a4032b26-proxy-ca-bundles\") pod \"74671cae-8e7e-40b3-8137-2b54a4032b26\" (UID: \"74671cae-8e7e-40b3-8137-2b54a4032b26\") "
Jan 22 13:48:43 crc kubenswrapper[4769]: I0122 13:48:43.573687 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74671cae-8e7e-40b3-8137-2b54a4032b26-config\") pod \"74671cae-8e7e-40b3-8137-2b54a4032b26\" (UID: \"74671cae-8e7e-40b3-8137-2b54a4032b26\") "
Jan 22 13:48:43 crc kubenswrapper[4769]: I0122 13:48:43.573730 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/74671cae-8e7e-40b3-8137-2b54a4032b26-serving-cert\") pod \"74671cae-8e7e-40b3-8137-2b54a4032b26\" (UID: \"74671cae-8e7e-40b3-8137-2b54a4032b26\") "
Jan 22 13:48:43 crc kubenswrapper[4769]: I0122 13:48:43.574593 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74671cae-8e7e-40b3-8137-2b54a4032b26-client-ca" (OuterVolumeSpecName: "client-ca") pod "74671cae-8e7e-40b3-8137-2b54a4032b26" (UID: "74671cae-8e7e-40b3-8137-2b54a4032b26"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 13:48:43 crc kubenswrapper[4769]: I0122 13:48:43.574597 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81a6e8a2-199d-482d-98bc-0f2f16383d4e-client-ca" (OuterVolumeSpecName: "client-ca") pod "81a6e8a2-199d-482d-98bc-0f2f16383d4e" (UID: "81a6e8a2-199d-482d-98bc-0f2f16383d4e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 13:48:43 crc kubenswrapper[4769]: I0122 13:48:43.574781 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74671cae-8e7e-40b3-8137-2b54a4032b26-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "74671cae-8e7e-40b3-8137-2b54a4032b26" (UID: "74671cae-8e7e-40b3-8137-2b54a4032b26"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 13:48:43 crc kubenswrapper[4769]: I0122 13:48:43.574853 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74671cae-8e7e-40b3-8137-2b54a4032b26-config" (OuterVolumeSpecName: "config") pod "74671cae-8e7e-40b3-8137-2b54a4032b26" (UID: "74671cae-8e7e-40b3-8137-2b54a4032b26"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 13:48:43 crc kubenswrapper[4769]: I0122 13:48:43.575308 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81a6e8a2-199d-482d-98bc-0f2f16383d4e-config" (OuterVolumeSpecName: "config") pod "81a6e8a2-199d-482d-98bc-0f2f16383d4e" (UID: "81a6e8a2-199d-482d-98bc-0f2f16383d4e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 13:48:43 crc kubenswrapper[4769]: I0122 13:48:43.578972 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81a6e8a2-199d-482d-98bc-0f2f16383d4e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "81a6e8a2-199d-482d-98bc-0f2f16383d4e" (UID: "81a6e8a2-199d-482d-98bc-0f2f16383d4e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 13:48:43 crc kubenswrapper[4769]: I0122 13:48:43.580342 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81a6e8a2-199d-482d-98bc-0f2f16383d4e-kube-api-access-9mgb2" (OuterVolumeSpecName: "kube-api-access-9mgb2") pod "81a6e8a2-199d-482d-98bc-0f2f16383d4e" (UID: "81a6e8a2-199d-482d-98bc-0f2f16383d4e"). InnerVolumeSpecName "kube-api-access-9mgb2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 13:48:43 crc kubenswrapper[4769]: I0122 13:48:43.580887 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74671cae-8e7e-40b3-8137-2b54a4032b26-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "74671cae-8e7e-40b3-8137-2b54a4032b26" (UID: "74671cae-8e7e-40b3-8137-2b54a4032b26"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 13:48:43 crc kubenswrapper[4769]: I0122 13:48:43.581157 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74671cae-8e7e-40b3-8137-2b54a4032b26-kube-api-access-xjtqx" (OuterVolumeSpecName: "kube-api-access-xjtqx") pod "74671cae-8e7e-40b3-8137-2b54a4032b26" (UID: "74671cae-8e7e-40b3-8137-2b54a4032b26"). InnerVolumeSpecName "kube-api-access-xjtqx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 13:48:43 crc kubenswrapper[4769]: I0122 13:48:43.675214 4769 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/81a6e8a2-199d-482d-98bc-0f2f16383d4e-client-ca\") on node \"crc\" DevicePath \"\""
Jan 22 13:48:43 crc kubenswrapper[4769]: I0122 13:48:43.675247 4769 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/74671cae-8e7e-40b3-8137-2b54a4032b26-client-ca\") on node \"crc\" DevicePath \"\""
Jan 22 13:48:43 crc kubenswrapper[4769]: I0122 13:48:43.675255 4769 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81a6e8a2-199d-482d-98bc-0f2f16383d4e-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 22 13:48:43 crc kubenswrapper[4769]: I0122 13:48:43.675264 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81a6e8a2-199d-482d-98bc-0f2f16383d4e-config\") on node \"crc\" DevicePath \"\""
Jan 22 13:48:43 crc kubenswrapper[4769]: I0122 13:48:43.675275 4769 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/74671cae-8e7e-40b3-8137-2b54a4032b26-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 22 13:48:43 crc kubenswrapper[4769]: I0122 13:48:43.675285 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74671cae-8e7e-40b3-8137-2b54a4032b26-config\") on node \"crc\" DevicePath \"\""
Jan 22 13:48:43 crc kubenswrapper[4769]: I0122 13:48:43.675297 4769 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/74671cae-8e7e-40b3-8137-2b54a4032b26-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 22 13:48:43 crc kubenswrapper[4769]: I0122 13:48:43.675309 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xjtqx\" (UniqueName: \"kubernetes.io/projected/74671cae-8e7e-40b3-8137-2b54a4032b26-kube-api-access-xjtqx\") on node \"crc\" DevicePath \"\""
Jan 22 13:48:43 crc kubenswrapper[4769]: I0122 13:48:43.675320 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mgb2\" (UniqueName: \"kubernetes.io/projected/81a6e8a2-199d-482d-98bc-0f2f16383d4e-kube-api-access-9mgb2\") on node \"crc\" DevicePath \"\""
Jan 22 13:48:43 crc kubenswrapper[4769]: I0122 13:48:43.930653 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 22 13:48:44 crc kubenswrapper[4769]: I0122 13:48:44.379218 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b57bf8468-2j2r6"
Jan 22 13:48:44 crc kubenswrapper[4769]: I0122 13:48:44.379295 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7d9c9df784-dt6l9"
Jan 22 13:48:44 crc kubenswrapper[4769]: I0122 13:48:44.426865 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9db9fd7fb-fmp74"]
Jan 22 13:48:44 crc kubenswrapper[4769]: I0122 13:48:44.427815 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-9db9fd7fb-fmp74"
Jan 22 13:48:44 crc kubenswrapper[4769]: I0122 13:48:44.429709 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 22 13:48:44 crc kubenswrapper[4769]: I0122 13:48:44.429737 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b57bf8468-2j2r6"]
Jan 22 13:48:44 crc kubenswrapper[4769]: I0122 13:48:44.429757 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 22 13:48:44 crc kubenswrapper[4769]: I0122 13:48:44.430082 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 22 13:48:44 crc kubenswrapper[4769]: I0122 13:48:44.430313 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 22 13:48:44 crc kubenswrapper[4769]: I0122 13:48:44.432361 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 22 13:48:44 crc kubenswrapper[4769]: I0122 13:48:44.432691 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b57bf8468-2j2r6"]
Jan 22 13:48:44 crc kubenswrapper[4769]: I0122 13:48:44.432505 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 22 13:48:44 crc kubenswrapper[4769]: I0122 13:48:44.445180 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9db9fd7fb-fmp74"]
Jan 22 13:48:44 crc kubenswrapper[4769]: I0122 13:48:44.467853 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7d9c9df784-dt6l9"]
Jan 22 13:48:44 crc kubenswrapper[4769]: I0122 13:48:44.479862 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7d9c9df784-dt6l9"]
Jan 22 13:48:44 crc kubenswrapper[4769]: I0122 13:48:44.588280 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf9268f0-d3a5-470c-b734-a25b11ebb088-serving-cert\") pod \"route-controller-manager-9db9fd7fb-fmp74\" (UID: \"bf9268f0-d3a5-470c-b734-a25b11ebb088\") " pod="openshift-route-controller-manager/route-controller-manager-9db9fd7fb-fmp74"
Jan 22 13:48:44 crc kubenswrapper[4769]: I0122 13:48:44.588356 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mbhx\" (UniqueName: \"kubernetes.io/projected/bf9268f0-d3a5-470c-b734-a25b11ebb088-kube-api-access-5mbhx\") pod \"route-controller-manager-9db9fd7fb-fmp74\" (UID: \"bf9268f0-d3a5-470c-b734-a25b11ebb088\") " pod="openshift-route-controller-manager/route-controller-manager-9db9fd7fb-fmp74"
Jan 22 13:48:44 crc kubenswrapper[4769]: I0122 13:48:44.588384 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bf9268f0-d3a5-470c-b734-a25b11ebb088-client-ca\") pod \"route-controller-manager-9db9fd7fb-fmp74\" (UID: \"bf9268f0-d3a5-470c-b734-a25b11ebb088\") " pod="openshift-route-controller-manager/route-controller-manager-9db9fd7fb-fmp74"
Jan 22 13:48:44 crc kubenswrapper[4769]: I0122 13:48:44.588421 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf9268f0-d3a5-470c-b734-a25b11ebb088-config\") pod \"route-controller-manager-9db9fd7fb-fmp74\" (UID: \"bf9268f0-d3a5-470c-b734-a25b11ebb088\") " pod="openshift-route-controller-manager/route-controller-manager-9db9fd7fb-fmp74"
Jan 22 13:48:44 crc kubenswrapper[4769]: I0122 13:48:44.689198 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf9268f0-d3a5-470c-b734-a25b11ebb088-serving-cert\") pod \"route-controller-manager-9db9fd7fb-fmp74\" (UID: \"bf9268f0-d3a5-470c-b734-a25b11ebb088\") " pod="openshift-route-controller-manager/route-controller-manager-9db9fd7fb-fmp74"
Jan 22 13:48:44 crc kubenswrapper[4769]: I0122 13:48:44.689275 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mbhx\" (UniqueName: \"kubernetes.io/projected/bf9268f0-d3a5-470c-b734-a25b11ebb088-kube-api-access-5mbhx\") pod \"route-controller-manager-9db9fd7fb-fmp74\" (UID: \"bf9268f0-d3a5-470c-b734-a25b11ebb088\") " pod="openshift-route-controller-manager/route-controller-manager-9db9fd7fb-fmp74"
Jan 22 13:48:44 crc kubenswrapper[4769]: I0122 13:48:44.689296 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bf9268f0-d3a5-470c-b734-a25b11ebb088-client-ca\") pod \"route-controller-manager-9db9fd7fb-fmp74\" (UID: \"bf9268f0-d3a5-470c-b734-a25b11ebb088\") " pod="openshift-route-controller-manager/route-controller-manager-9db9fd7fb-fmp74"
Jan 22 13:48:44 crc kubenswrapper[4769]: I0122 13:48:44.689350 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf9268f0-d3a5-470c-b734-a25b11ebb088-config\") pod \"route-controller-manager-9db9fd7fb-fmp74\" (UID: \"bf9268f0-d3a5-470c-b734-a25b11ebb088\") " pod="openshift-route-controller-manager/route-controller-manager-9db9fd7fb-fmp74"
Jan 22 13:48:44 crc kubenswrapper[4769]: I0122 13:48:44.690460 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bf9268f0-d3a5-470c-b734-a25b11ebb088-client-ca\") pod \"route-controller-manager-9db9fd7fb-fmp74\" (UID: \"bf9268f0-d3a5-470c-b734-a25b11ebb088\") " pod="openshift-route-controller-manager/route-controller-manager-9db9fd7fb-fmp74"
Jan 22 13:48:44 crc kubenswrapper[4769]: I0122 13:48:44.690946 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf9268f0-d3a5-470c-b734-a25b11ebb088-config\") pod \"route-controller-manager-9db9fd7fb-fmp74\" (UID: \"bf9268f0-d3a5-470c-b734-a25b11ebb088\") " pod="openshift-route-controller-manager/route-controller-manager-9db9fd7fb-fmp74"
Jan 22 13:48:44 crc kubenswrapper[4769]: I0122 13:48:44.696396 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf9268f0-d3a5-470c-b734-a25b11ebb088-serving-cert\") pod \"route-controller-manager-9db9fd7fb-fmp74\" (UID: \"bf9268f0-d3a5-470c-b734-a25b11ebb088\") " pod="openshift-route-controller-manager/route-controller-manager-9db9fd7fb-fmp74"
Jan 22 13:48:44 crc kubenswrapper[4769]: I0122 13:48:44.715506 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mbhx\" (UniqueName: \"kubernetes.io/projected/bf9268f0-d3a5-470c-b734-a25b11ebb088-kube-api-access-5mbhx\") pod \"route-controller-manager-9db9fd7fb-fmp74\" (UID: \"bf9268f0-d3a5-470c-b734-a25b11ebb088\") " pod="openshift-route-controller-manager/route-controller-manager-9db9fd7fb-fmp74"
Jan 22 13:48:44 crc kubenswrapper[4769]: I0122 13:48:44.740371 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-9db9fd7fb-fmp74"
Jan 22 13:48:44 crc kubenswrapper[4769]: I0122 13:48:44.891109 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74671cae-8e7e-40b3-8137-2b54a4032b26" path="/var/lib/kubelet/pods/74671cae-8e7e-40b3-8137-2b54a4032b26/volumes"
Jan 22 13:48:44 crc kubenswrapper[4769]: I0122 13:48:44.891874 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81a6e8a2-199d-482d-98bc-0f2f16383d4e" path="/var/lib/kubelet/pods/81a6e8a2-199d-482d-98bc-0f2f16383d4e/volumes"
Jan 22 13:48:44 crc kubenswrapper[4769]: I0122 13:48:44.892226 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88755d81-da75-40b3-97c4-224eaad0eca2" path="/var/lib/kubelet/pods/88755d81-da75-40b3-97c4-224eaad0eca2/volumes"
Jan 22 13:48:44 crc kubenswrapper[4769]: I0122 13:48:44.922054 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9db9fd7fb-fmp74"]
Jan 22 13:48:44 crc kubenswrapper[4769]: I0122 13:48:44.990512 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 22 13:48:45 crc kubenswrapper[4769]: I0122 13:48:45.385371 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-9db9fd7fb-fmp74" event={"ID":"bf9268f0-d3a5-470c-b734-a25b11ebb088","Type":"ContainerStarted","Data":"3104553fb5aa42e836333e0998d4bb894a479a4adf589398bbdf1b42722c06a3"}
Jan 22 13:48:45 crc kubenswrapper[4769]: I0122 13:48:45.385605 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-9db9fd7fb-fmp74"
Jan 22 13:48:45 crc kubenswrapper[4769]: I0122 13:48:45.385616 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-9db9fd7fb-fmp74" event={"ID":"bf9268f0-d3a5-470c-b734-a25b11ebb088","Type":"ContainerStarted","Data":"6cc1e5e19564d09af54c555b766313a9b3a7cbbeabd3df7a270e34fcad39380a"}
Jan 22 13:48:45 crc kubenswrapper[4769]: I0122 13:48:45.389995 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-9db9fd7fb-fmp74"
Jan 22 13:48:45 crc kubenswrapper[4769]: I0122 13:48:45.402173 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-9db9fd7fb-fmp74" podStartSLOduration=3.402153341 podStartE2EDuration="3.402153341s" podCreationTimestamp="2026-01-22 13:48:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:48:45.399811396 +0000 UTC m=+304.810921345" watchObservedRunningTime="2026-01-22 13:48:45.402153341 +0000 UTC m=+304.813263270"
Jan 22 13:48:47 crc kubenswrapper[4769]: I0122 13:48:47.416893 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5d8d8f6646-fl7vl"]
Jan 22 13:48:47 crc kubenswrapper[4769]: I0122 13:48:47.417847 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5d8d8f6646-fl7vl"
Jan 22 13:48:47 crc kubenswrapper[4769]: I0122 13:48:47.421637 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 22 13:48:47 crc kubenswrapper[4769]: I0122 13:48:47.421769 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 22 13:48:47 crc kubenswrapper[4769]: I0122 13:48:47.421643 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 22 13:48:47 crc kubenswrapper[4769]: I0122 13:48:47.422557 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 22 13:48:47 crc kubenswrapper[4769]: I0122 13:48:47.422827 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 22 13:48:47 crc kubenswrapper[4769]: I0122 13:48:47.426986 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 22 13:48:47 crc kubenswrapper[4769]: I0122 13:48:47.435356 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 22 13:48:47 crc kubenswrapper[4769]: I0122 13:48:47.435736 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5d8d8f6646-fl7vl"]
Jan 22 13:48:47 crc kubenswrapper[4769]: I0122 13:48:47.521319 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/016c4fa8-4f5f-4864-bd36-07b09ce79d08-serving-cert\") pod \"controller-manager-5d8d8f6646-fl7vl\" (UID: \"016c4fa8-4f5f-4864-bd36-07b09ce79d08\") " pod="openshift-controller-manager/controller-manager-5d8d8f6646-fl7vl"
Jan 22 13:48:47 crc kubenswrapper[4769]: I0122 13:48:47.521412 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/016c4fa8-4f5f-4864-bd36-07b09ce79d08-client-ca\") pod \"controller-manager-5d8d8f6646-fl7vl\" (UID: \"016c4fa8-4f5f-4864-bd36-07b09ce79d08\") " pod="openshift-controller-manager/controller-manager-5d8d8f6646-fl7vl"
Jan 22 13:48:47 crc kubenswrapper[4769]: I0122 13:48:47.521436 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/016c4fa8-4f5f-4864-bd36-07b09ce79d08-proxy-ca-bundles\") pod \"controller-manager-5d8d8f6646-fl7vl\" (UID: \"016c4fa8-4f5f-4864-bd36-07b09ce79d08\") " pod="openshift-controller-manager/controller-manager-5d8d8f6646-fl7vl"
Jan 22 13:48:47 crc kubenswrapper[4769]: I0122 13:48:47.521451 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fmq4\" (UniqueName: \"kubernetes.io/projected/016c4fa8-4f5f-4864-bd36-07b09ce79d08-kube-api-access-4fmq4\") pod \"controller-manager-5d8d8f6646-fl7vl\" (UID: \"016c4fa8-4f5f-4864-bd36-07b09ce79d08\") " pod="openshift-controller-manager/controller-manager-5d8d8f6646-fl7vl"
Jan 22 13:48:47 crc kubenswrapper[4769]: I0122 13:48:47.521480 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/016c4fa8-4f5f-4864-bd36-07b09ce79d08-config\") pod \"controller-manager-5d8d8f6646-fl7vl\" (UID: \"016c4fa8-4f5f-4864-bd36-07b09ce79d08\") " pod="openshift-controller-manager/controller-manager-5d8d8f6646-fl7vl"
Jan 22 13:48:47 crc kubenswrapper[4769]: I0122 13:48:47.623596 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/016c4fa8-4f5f-4864-bd36-07b09ce79d08-serving-cert\") pod \"controller-manager-5d8d8f6646-fl7vl\" (UID: \"016c4fa8-4f5f-4864-bd36-07b09ce79d08\") " pod="openshift-controller-manager/controller-manager-5d8d8f6646-fl7vl"
Jan 22 13:48:47 crc kubenswrapper[4769]: I0122 13:48:47.623670 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/016c4fa8-4f5f-4864-bd36-07b09ce79d08-client-ca\") pod \"controller-manager-5d8d8f6646-fl7vl\" (UID: \"016c4fa8-4f5f-4864-bd36-07b09ce79d08\") " pod="openshift-controller-manager/controller-manager-5d8d8f6646-fl7vl"
Jan 22 13:48:47 crc kubenswrapper[4769]: I0122 13:48:47.623696 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/016c4fa8-4f5f-4864-bd36-07b09ce79d08-proxy-ca-bundles\") pod \"controller-manager-5d8d8f6646-fl7vl\" (UID: \"016c4fa8-4f5f-4864-bd36-07b09ce79d08\") " pod="openshift-controller-manager/controller-manager-5d8d8f6646-fl7vl"
Jan 22 13:48:47 crc kubenswrapper[4769]: I0122 13:48:47.623715 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fmq4\" (UniqueName: \"kubernetes.io/projected/016c4fa8-4f5f-4864-bd36-07b09ce79d08-kube-api-access-4fmq4\") pod \"controller-manager-5d8d8f6646-fl7vl\" (UID: \"016c4fa8-4f5f-4864-bd36-07b09ce79d08\") " pod="openshift-controller-manager/controller-manager-5d8d8f6646-fl7vl"
Jan 22 13:48:47 crc kubenswrapper[4769]: I0122 13:48:47.623748 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/016c4fa8-4f5f-4864-bd36-07b09ce79d08-config\") pod \"controller-manager-5d8d8f6646-fl7vl\" (UID: \"016c4fa8-4f5f-4864-bd36-07b09ce79d08\") " pod="openshift-controller-manager/controller-manager-5d8d8f6646-fl7vl"
Jan 22 13:48:47 crc kubenswrapper[4769]: I0122 13:48:47.625636 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/016c4fa8-4f5f-4864-bd36-07b09ce79d08-proxy-ca-bundles\") pod \"controller-manager-5d8d8f6646-fl7vl\" (UID: \"016c4fa8-4f5f-4864-bd36-07b09ce79d08\") " pod="openshift-controller-manager/controller-manager-5d8d8f6646-fl7vl"
Jan 22 13:48:47 crc kubenswrapper[4769]: I0122 13:48:47.625642 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/016c4fa8-4f5f-4864-bd36-07b09ce79d08-config\") pod \"controller-manager-5d8d8f6646-fl7vl\" (UID: \"016c4fa8-4f5f-4864-bd36-07b09ce79d08\") " pod="openshift-controller-manager/controller-manager-5d8d8f6646-fl7vl"
Jan 22 13:48:47 crc kubenswrapper[4769]: I0122 13:48:47.626572 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/016c4fa8-4f5f-4864-bd36-07b09ce79d08-client-ca\") pod \"controller-manager-5d8d8f6646-fl7vl\" (UID: \"016c4fa8-4f5f-4864-bd36-07b09ce79d08\") " pod="openshift-controller-manager/controller-manager-5d8d8f6646-fl7vl"
Jan 22 13:48:47 crc kubenswrapper[4769]: I0122 13:48:47.631165 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/016c4fa8-4f5f-4864-bd36-07b09ce79d08-serving-cert\") pod \"controller-manager-5d8d8f6646-fl7vl\" (UID: \"016c4fa8-4f5f-4864-bd36-07b09ce79d08\") " pod="openshift-controller-manager/controller-manager-5d8d8f6646-fl7vl"
Jan 22 13:48:47 crc kubenswrapper[4769]: I0122 13:48:47.642211 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fmq4\" (UniqueName: \"kubernetes.io/projected/016c4fa8-4f5f-4864-bd36-07b09ce79d08-kube-api-access-4fmq4\") pod \"controller-manager-5d8d8f6646-fl7vl\" (UID: \"016c4fa8-4f5f-4864-bd36-07b09ce79d08\") " pod="openshift-controller-manager/controller-manager-5d8d8f6646-fl7vl"
Jan 22 13:48:47 crc kubenswrapper[4769]: I0122 13:48:47.737553 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5d8d8f6646-fl7vl"
Jan 22 13:48:47 crc kubenswrapper[4769]: I0122 13:48:47.924919 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5d8d8f6646-fl7vl"]
Jan 22 13:48:47 crc kubenswrapper[4769]: W0122 13:48:47.947015 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod016c4fa8_4f5f_4864_bd36_07b09ce79d08.slice/crio-c1e00b0365e3cf1966a9be207e6d39bc0ea5aa704d87365d6b58123e70795046 WatchSource:0}: Error finding container c1e00b0365e3cf1966a9be207e6d39bc0ea5aa704d87365d6b58123e70795046: Status 404 returned error can't find the container with id c1e00b0365e3cf1966a9be207e6d39bc0ea5aa704d87365d6b58123e70795046
Jan 22 13:48:48 crc kubenswrapper[4769]: I0122 13:48:48.402423 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d8d8f6646-fl7vl" event={"ID":"016c4fa8-4f5f-4864-bd36-07b09ce79d08","Type":"ContainerStarted","Data":"ab7030d019c42ab8878671b18634cf3d42d459fb4aa35caf3cd6c916cef00a9b"}
Jan 22 13:48:48 crc kubenswrapper[4769]: I0122 13:48:48.402462 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d8d8f6646-fl7vl" event={"ID":"016c4fa8-4f5f-4864-bd36-07b09ce79d08","Type":"ContainerStarted","Data":"c1e00b0365e3cf1966a9be207e6d39bc0ea5aa704d87365d6b58123e70795046"}
Jan 22 13:48:48 crc kubenswrapper[4769]: I0122 13:48:48.402816 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5d8d8f6646-fl7vl"
Jan 22 13:48:48 crc kubenswrapper[4769]: I0122 13:48:48.408366 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5d8d8f6646-fl7vl"
Jan 22 13:48:48 crc kubenswrapper[4769]: I0122 13:48:48.452187 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5d8d8f6646-fl7vl" podStartSLOduration=6.452164198 podStartE2EDuration="6.452164198s" podCreationTimestamp="2026-01-22 13:48:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:48:48.427335285 +0000 UTC m=+307.838445214" watchObservedRunningTime="2026-01-22 13:48:48.452164198 +0000 UTC m=+307.863274127"
Jan 22 13:48:51 crc kubenswrapper[4769]: I0122 13:48:51.649394 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Jan 22 13:48:51 crc kubenswrapper[4769]: I0122 13:48:51.659485 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Jan 22 13:48:58 crc kubenswrapper[4769]: I0122 13:48:58.541197 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Jan 22 13:49:01 crc kubenswrapper[4769]: I0122 13:49:01.700960 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5d8d8f6646-fl7vl"]
Jan 22 13:49:01 crc kubenswrapper[4769]: I0122 13:49:01.702420 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5d8d8f6646-fl7vl" podUID="016c4fa8-4f5f-4864-bd36-07b09ce79d08" containerName="controller-manager" containerID="cri-o://ab7030d019c42ab8878671b18634cf3d42d459fb4aa35caf3cd6c916cef00a9b" gracePeriod=30
Jan 22 13:49:02 crc kubenswrapper[4769]: I0122 13:49:02.490972 4769 generic.go:334] "Generic (PLEG): container finished" podID="016c4fa8-4f5f-4864-bd36-07b09ce79d08" containerID="ab7030d019c42ab8878671b18634cf3d42d459fb4aa35caf3cd6c916cef00a9b" exitCode=0
Jan 22 13:49:02 crc kubenswrapper[4769]: I0122 13:49:02.491212 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d8d8f6646-fl7vl" event={"ID":"016c4fa8-4f5f-4864-bd36-07b09ce79d08","Type":"ContainerDied","Data":"ab7030d019c42ab8878671b18634cf3d42d459fb4aa35caf3cd6c916cef00a9b"}
Jan 22 13:49:02 crc kubenswrapper[4769]: I0122 13:49:02.716581 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5d8d8f6646-fl7vl"
Jan 22 13:49:02 crc kubenswrapper[4769]: I0122 13:49:02.816442 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4fmq4\" (UniqueName: \"kubernetes.io/projected/016c4fa8-4f5f-4864-bd36-07b09ce79d08-kube-api-access-4fmq4\") pod \"016c4fa8-4f5f-4864-bd36-07b09ce79d08\" (UID: \"016c4fa8-4f5f-4864-bd36-07b09ce79d08\") "
Jan 22 13:49:02 crc kubenswrapper[4769]: I0122 13:49:02.816561 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/016c4fa8-4f5f-4864-bd36-07b09ce79d08-serving-cert\") pod \"016c4fa8-4f5f-4864-bd36-07b09ce79d08\" (UID: \"016c4fa8-4f5f-4864-bd36-07b09ce79d08\") "
Jan 22 13:49:02 crc kubenswrapper[4769]: I0122 13:49:02.816813 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/016c4fa8-4f5f-4864-bd36-07b09ce79d08-config\") pod \"016c4fa8-4f5f-4864-bd36-07b09ce79d08\" (UID: \"016c4fa8-4f5f-4864-bd36-07b09ce79d08\") "
Jan 22 13:49:02 crc kubenswrapper[4769]: I0122 13:49:02.816870 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/016c4fa8-4f5f-4864-bd36-07b09ce79d08-client-ca\") pod \"016c4fa8-4f5f-4864-bd36-07b09ce79d08\" (UID: \"016c4fa8-4f5f-4864-bd36-07b09ce79d08\") "
Jan 22 13:49:02 crc kubenswrapper[4769]: I0122 13:49:02.816965 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/016c4fa8-4f5f-4864-bd36-07b09ce79d08-proxy-ca-bundles\") pod \"016c4fa8-4f5f-4864-bd36-07b09ce79d08\" (UID: \"016c4fa8-4f5f-4864-bd36-07b09ce79d08\") "
Jan 22 13:49:02 crc kubenswrapper[4769]: I0122 13:49:02.817569 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/016c4fa8-4f5f-4864-bd36-07b09ce79d08-client-ca" (OuterVolumeSpecName: "client-ca") pod "016c4fa8-4f5f-4864-bd36-07b09ce79d08" (UID: "016c4fa8-4f5f-4864-bd36-07b09ce79d08"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 13:49:02 crc kubenswrapper[4769]: I0122 13:49:02.817598 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/016c4fa8-4f5f-4864-bd36-07b09ce79d08-config" (OuterVolumeSpecName: "config") pod "016c4fa8-4f5f-4864-bd36-07b09ce79d08" (UID: "016c4fa8-4f5f-4864-bd36-07b09ce79d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 13:49:02 crc kubenswrapper[4769]: I0122 13:49:02.817583 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/016c4fa8-4f5f-4864-bd36-07b09ce79d08-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "016c4fa8-4f5f-4864-bd36-07b09ce79d08" (UID: "016c4fa8-4f5f-4864-bd36-07b09ce79d08"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 13:49:02 crc kubenswrapper[4769]: I0122 13:49:02.822910 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/016c4fa8-4f5f-4864-bd36-07b09ce79d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "016c4fa8-4f5f-4864-bd36-07b09ce79d08" (UID: "016c4fa8-4f5f-4864-bd36-07b09ce79d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 13:49:02 crc kubenswrapper[4769]: I0122 13:49:02.822927 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/016c4fa8-4f5f-4864-bd36-07b09ce79d08-kube-api-access-4fmq4" (OuterVolumeSpecName: "kube-api-access-4fmq4") pod "016c4fa8-4f5f-4864-bd36-07b09ce79d08" (UID: "016c4fa8-4f5f-4864-bd36-07b09ce79d08"). InnerVolumeSpecName "kube-api-access-4fmq4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 13:49:02 crc kubenswrapper[4769]: I0122 13:49:02.918412 4769 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/016c4fa8-4f5f-4864-bd36-07b09ce79d08-client-ca\") on node \"crc\" DevicePath \"\""
Jan 22 13:49:02 crc kubenswrapper[4769]: I0122 13:49:02.918457 4769 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/016c4fa8-4f5f-4864-bd36-07b09ce79d08-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 22 13:49:02 crc kubenswrapper[4769]: I0122 13:49:02.918472 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4fmq4\" (UniqueName: \"kubernetes.io/projected/016c4fa8-4f5f-4864-bd36-07b09ce79d08-kube-api-access-4fmq4\") on node \"crc\" DevicePath \"\""
Jan 22 13:49:02 crc kubenswrapper[4769]: I0122 13:49:02.918484 4769 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/016c4fa8-4f5f-4864-bd36-07b09ce79d08-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 22 13:49:02 crc kubenswrapper[4769]: I0122 13:49:02.918495 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/016c4fa8-4f5f-4864-bd36-07b09ce79d08-config\") on node \"crc\" DevicePath \"\""
Jan 22 13:49:03 crc kubenswrapper[4769]: I0122 13:49:03.430408 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7d9c9df784-zfk7f"]
Jan 22 13:49:03 crc kubenswrapper[4769]: E0122 13:49:03.430900 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="016c4fa8-4f5f-4864-bd36-07b09ce79d08" containerName="controller-manager"
Jan 22 13:49:03 crc kubenswrapper[4769]: I0122 13:49:03.430913 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="016c4fa8-4f5f-4864-bd36-07b09ce79d08" containerName="controller-manager"
Jan 22 13:49:03 crc kubenswrapper[4769]: I0122 13:49:03.431001 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="016c4fa8-4f5f-4864-bd36-07b09ce79d08" containerName="controller-manager"
Jan 22 13:49:03 crc kubenswrapper[4769]: I0122 13:49:03.431355 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7d9c9df784-zfk7f"
Jan 22 13:49:03 crc kubenswrapper[4769]: I0122 13:49:03.440457 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7d9c9df784-zfk7f"]
Jan 22 13:49:03 crc kubenswrapper[4769]: I0122 13:49:03.498426 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d8d8f6646-fl7vl" event={"ID":"016c4fa8-4f5f-4864-bd36-07b09ce79d08","Type":"ContainerDied","Data":"c1e00b0365e3cf1966a9be207e6d39bc0ea5aa704d87365d6b58123e70795046"}
Jan 22 13:49:03 crc kubenswrapper[4769]: I0122 13:49:03.498475 4769 scope.go:117] "RemoveContainer" containerID="ab7030d019c42ab8878671b18634cf3d42d459fb4aa35caf3cd6c916cef00a9b"
Jan 22 13:49:03 crc kubenswrapper[4769]: I0122 13:49:03.498522 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5d8d8f6646-fl7vl"
Jan 22 13:49:03 crc kubenswrapper[4769]: I0122 13:49:03.523139 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5d8d8f6646-fl7vl"]
Jan 22 13:49:03 crc kubenswrapper[4769]: I0122 13:49:03.526415 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mkv6\" (UniqueName: \"kubernetes.io/projected/7e370c3a-a358-4548-bb11-7780ee6ef6b8-kube-api-access-4mkv6\") pod \"controller-manager-7d9c9df784-zfk7f\" (UID: \"7e370c3a-a358-4548-bb11-7780ee6ef6b8\") " pod="openshift-controller-manager/controller-manager-7d9c9df784-zfk7f"
Jan 22 13:49:03 crc kubenswrapper[4769]: I0122 13:49:03.526568 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e370c3a-a358-4548-bb11-7780ee6ef6b8-config\") pod \"controller-manager-7d9c9df784-zfk7f\" (UID: \"7e370c3a-a358-4548-bb11-7780ee6ef6b8\") " pod="openshift-controller-manager/controller-manager-7d9c9df784-zfk7f"
Jan 22 13:49:03 crc kubenswrapper[4769]: I0122 13:49:03.526705 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7e370c3a-a358-4548-bb11-7780ee6ef6b8-client-ca\") pod \"controller-manager-7d9c9df784-zfk7f\" (UID: \"7e370c3a-a358-4548-bb11-7780ee6ef6b8\") " pod="openshift-controller-manager/controller-manager-7d9c9df784-zfk7f"
Jan 22 13:49:03 crc kubenswrapper[4769]: I0122 13:49:03.526785 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e370c3a-a358-4548-bb11-7780ee6ef6b8-serving-cert\") pod \"controller-manager-7d9c9df784-zfk7f\" (UID: \"7e370c3a-a358-4548-bb11-7780ee6ef6b8\") " pod="openshift-controller-manager/controller-manager-7d9c9df784-zfk7f"
Jan 22 13:49:03 crc kubenswrapper[4769]: I0122 13:49:03.526918 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7e370c3a-a358-4548-bb11-7780ee6ef6b8-proxy-ca-bundles\") pod \"controller-manager-7d9c9df784-zfk7f\" (UID: \"7e370c3a-a358-4548-bb11-7780ee6ef6b8\") " pod="openshift-controller-manager/controller-manager-7d9c9df784-zfk7f"
Jan 22 13:49:03 crc kubenswrapper[4769]: I0122 13:49:03.529966 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5d8d8f6646-fl7vl"]
Jan 22 13:49:03 crc kubenswrapper[4769]: I0122 13:49:03.628673 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mkv6\" (UniqueName: \"kubernetes.io/projected/7e370c3a-a358-4548-bb11-7780ee6ef6b8-kube-api-access-4mkv6\") pod \"controller-manager-7d9c9df784-zfk7f\" (UID: \"7e370c3a-a358-4548-bb11-7780ee6ef6b8\") " pod="openshift-controller-manager/controller-manager-7d9c9df784-zfk7f"
Jan 22 13:49:03 crc kubenswrapper[4769]: I0122 13:49:03.628735 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e370c3a-a358-4548-bb11-7780ee6ef6b8-config\") pod \"controller-manager-7d9c9df784-zfk7f\" (UID: \"7e370c3a-a358-4548-bb11-7780ee6ef6b8\") " pod="openshift-controller-manager/controller-manager-7d9c9df784-zfk7f"
Jan 22 13:49:03 crc kubenswrapper[4769]: I0122 13:49:03.628771 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7e370c3a-a358-4548-bb11-7780ee6ef6b8-client-ca\") pod \"controller-manager-7d9c9df784-zfk7f\" (UID: \"7e370c3a-a358-4548-bb11-7780ee6ef6b8\") " pod="openshift-controller-manager/controller-manager-7d9c9df784-zfk7f"
Jan 22 13:49:03 crc kubenswrapper[4769]: I0122 13:49:03.628804 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e370c3a-a358-4548-bb11-7780ee6ef6b8-serving-cert\") pod \"controller-manager-7d9c9df784-zfk7f\" (UID: \"7e370c3a-a358-4548-bb11-7780ee6ef6b8\") " pod="openshift-controller-manager/controller-manager-7d9c9df784-zfk7f"
Jan 22 13:49:03 crc kubenswrapper[4769]: I0122 13:49:03.628871 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7e370c3a-a358-4548-bb11-7780ee6ef6b8-proxy-ca-bundles\") pod \"controller-manager-7d9c9df784-zfk7f\" (UID: \"7e370c3a-a358-4548-bb11-7780ee6ef6b8\") " pod="openshift-controller-manager/controller-manager-7d9c9df784-zfk7f"
Jan 22 13:49:03 crc kubenswrapper[4769]: I0122 13:49:03.630357 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7e370c3a-a358-4548-bb11-7780ee6ef6b8-proxy-ca-bundles\") pod \"controller-manager-7d9c9df784-zfk7f\" (UID: \"7e370c3a-a358-4548-bb11-7780ee6ef6b8\") " pod="openshift-controller-manager/controller-manager-7d9c9df784-zfk7f"
Jan 22 13:49:03 crc kubenswrapper[4769]: I0122 13:49:03.630476 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7e370c3a-a358-4548-bb11-7780ee6ef6b8-client-ca\") pod \"controller-manager-7d9c9df784-zfk7f\" (UID: \"7e370c3a-a358-4548-bb11-7780ee6ef6b8\") " pod="openshift-controller-manager/controller-manager-7d9c9df784-zfk7f"
Jan 22 13:49:03 crc kubenswrapper[4769]: I0122 13:49:03.630762 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e370c3a-a358-4548-bb11-7780ee6ef6b8-config\") pod \"controller-manager-7d9c9df784-zfk7f\" (UID: \"7e370c3a-a358-4548-bb11-7780ee6ef6b8\") " pod="openshift-controller-manager/controller-manager-7d9c9df784-zfk7f"
Jan 22 13:49:03 crc kubenswrapper[4769]: I0122 13:49:03.633192 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e370c3a-a358-4548-bb11-7780ee6ef6b8-serving-cert\") pod \"controller-manager-7d9c9df784-zfk7f\" (UID: \"7e370c3a-a358-4548-bb11-7780ee6ef6b8\") " pod="openshift-controller-manager/controller-manager-7d9c9df784-zfk7f"
\"kubernetes.io/secret/7e370c3a-a358-4548-bb11-7780ee6ef6b8-serving-cert\") pod \"controller-manager-7d9c9df784-zfk7f\" (UID: \"7e370c3a-a358-4548-bb11-7780ee6ef6b8\") " pod="openshift-controller-manager/controller-manager-7d9c9df784-zfk7f" Jan 22 13:49:03 crc kubenswrapper[4769]: I0122 13:49:03.649532 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mkv6\" (UniqueName: \"kubernetes.io/projected/7e370c3a-a358-4548-bb11-7780ee6ef6b8-kube-api-access-4mkv6\") pod \"controller-manager-7d9c9df784-zfk7f\" (UID: \"7e370c3a-a358-4548-bb11-7780ee6ef6b8\") " pod="openshift-controller-manager/controller-manager-7d9c9df784-zfk7f" Jan 22 13:49:03 crc kubenswrapper[4769]: I0122 13:49:03.744949 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7d9c9df784-zfk7f" Jan 22 13:49:04 crc kubenswrapper[4769]: I0122 13:49:04.124126 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7d9c9df784-zfk7f"] Jan 22 13:49:04 crc kubenswrapper[4769]: I0122 13:49:04.505251 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7d9c9df784-zfk7f" event={"ID":"7e370c3a-a358-4548-bb11-7780ee6ef6b8","Type":"ContainerStarted","Data":"8ea0cad14a4a41f18b0d4d0852fd4e923c49a749d882170f2419c11a8b351992"} Jan 22 13:49:04 crc kubenswrapper[4769]: I0122 13:49:04.505623 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7d9c9df784-zfk7f" Jan 22 13:49:04 crc kubenswrapper[4769]: I0122 13:49:04.505636 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7d9c9df784-zfk7f" event={"ID":"7e370c3a-a358-4548-bb11-7780ee6ef6b8","Type":"ContainerStarted","Data":"28f72b117ed18a5edb4a3d77a06e43c8efcb869efe58ee963c246653f12abbc1"} Jan 22 13:49:04 crc kubenswrapper[4769]: I0122 13:49:04.513329 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7d9c9df784-zfk7f" Jan 22 13:49:04 crc kubenswrapper[4769]: I0122 13:49:04.526834 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7d9c9df784-zfk7f" podStartSLOduration=3.526814727 podStartE2EDuration="3.526814727s" podCreationTimestamp="2026-01-22 13:49:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:49:04.521640635 +0000 UTC m=+323.932750574" watchObservedRunningTime="2026-01-22 13:49:04.526814727 +0000 UTC m=+323.937924676" Jan 22 13:49:04 crc kubenswrapper[4769]: I0122 13:49:04.889548 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="016c4fa8-4f5f-4864-bd36-07b09ce79d08" path="/var/lib/kubelet/pods/016c4fa8-4f5f-4864-bd36-07b09ce79d08/volumes" Jan 22 13:49:09 crc kubenswrapper[4769]: I0122 13:49:09.963719 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7wh4n"] Jan 22 13:49:09 crc kubenswrapper[4769]: I0122 13:49:09.964589 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7wh4n" podUID="4f403243-0359-478d-a3a6-29a8f0bc29e2" containerName="registry-server" containerID="cri-o://b88e53f360c79b642215822aa458c85cddfb527d712a2e23409b20d9d691b259" gracePeriod=30 Jan 
22 13:49:09 crc kubenswrapper[4769]: I0122 13:49:09.978174 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lxbp4"] Jan 22 13:49:09 crc kubenswrapper[4769]: I0122 13:49:09.978448 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-lxbp4" podUID="7d9e80ce-c46e-4a99-814e-0d9b1b65623f" containerName="registry-server" containerID="cri-o://40c54e06453c65c374b60fc978fde1151fc81cdd83905f6d1eab45b8f04a0be1" gracePeriod=30 Jan 22 13:49:09 crc kubenswrapper[4769]: I0122 13:49:09.996531 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5jwbt"] Jan 22 13:49:09 crc kubenswrapper[4769]: I0122 13:49:09.997256 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-5jwbt" podUID="dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae" containerName="marketplace-operator" containerID="cri-o://e34ed27b31ae8964c9182b8aa629d506dd39a530839a18c60e8a9d7b09eba8d4" gracePeriod=30 Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.002263 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-v8jk5"] Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.002539 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-v8jk5" podUID="98dd81ac-1a92-4d5a-9e09-bcc49ac33a85" containerName="registry-server" containerID="cri-o://2531649194d6834a01b61908b7793b00e8109633abda7d5a02d5eb68f320b893" gracePeriod=30 Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.005676 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k2w22"] Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.005945 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-k2w22" podUID="652c2c5a-f885-4bf3-a4f8-73a4717f6a3a" containerName="registry-server" containerID="cri-o://d825a6e9070be650270f2a51743038dd26cc2e4afe06ccff5aa90cefb1c29a2b" gracePeriod=30 Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.009419 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7vfmb"] Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.010440 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-7vfmb" Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.026736 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7vfmb"] Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.115431 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1cfacd8e-cbec-4f68-b90c-ede3a679e454-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-7vfmb\" (UID: \"1cfacd8e-cbec-4f68-b90c-ede3a679e454\") " pod="openshift-marketplace/marketplace-operator-79b997595-7vfmb" Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.115493 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1cfacd8e-cbec-4f68-b90c-ede3a679e454-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-7vfmb\" (UID: \"1cfacd8e-cbec-4f68-b90c-ede3a679e454\") " pod="openshift-marketplace/marketplace-operator-79b997595-7vfmb" Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.115531 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95nkq\" (UniqueName: \"kubernetes.io/projected/1cfacd8e-cbec-4f68-b90c-ede3a679e454-kube-api-access-95nkq\") pod \"marketplace-operator-79b997595-7vfmb\" (UID: \"1cfacd8e-cbec-4f68-b90c-ede3a679e454\") " pod="openshift-marketplace/marketplace-operator-79b997595-7vfmb" Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.217556 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1cfacd8e-cbec-4f68-b90c-ede3a679e454-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-7vfmb\" (UID: \"1cfacd8e-cbec-4f68-b90c-ede3a679e454\") " pod="openshift-marketplace/marketplace-operator-79b997595-7vfmb" Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.217639 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1cfacd8e-cbec-4f68-b90c-ede3a679e454-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-7vfmb\" (UID: \"1cfacd8e-cbec-4f68-b90c-ede3a679e454\") " pod="openshift-marketplace/marketplace-operator-79b997595-7vfmb" Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.217667 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95nkq\" (UniqueName: \"kubernetes.io/projected/1cfacd8e-cbec-4f68-b90c-ede3a679e454-kube-api-access-95nkq\") pod \"marketplace-operator-79b997595-7vfmb\" (UID: \"1cfacd8e-cbec-4f68-b90c-ede3a679e454\") " pod="openshift-marketplace/marketplace-operator-79b997595-7vfmb" Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.218940 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1cfacd8e-cbec-4f68-b90c-ede3a679e454-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-7vfmb\" (UID: \"1cfacd8e-cbec-4f68-b90c-ede3a679e454\") " pod="openshift-marketplace/marketplace-operator-79b997595-7vfmb" Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.232505 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/1cfacd8e-cbec-4f68-b90c-ede3a679e454-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-7vfmb\" (UID: \"1cfacd8e-cbec-4f68-b90c-ede3a679e454\") " pod="openshift-marketplace/marketplace-operator-79b997595-7vfmb" Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.233958 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95nkq\" (UniqueName: \"kubernetes.io/projected/1cfacd8e-cbec-4f68-b90c-ede3a679e454-kube-api-access-95nkq\") pod \"marketplace-operator-79b997595-7vfmb\" (UID: \"1cfacd8e-cbec-4f68-b90c-ede3a679e454\") " pod="openshift-marketplace/marketplace-operator-79b997595-7vfmb" Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.327113 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-7vfmb" Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.496356 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7wh4n" Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.543036 4769 generic.go:334] "Generic (PLEG): container finished" podID="4f403243-0359-478d-a3a6-29a8f0bc29e2" containerID="b88e53f360c79b642215822aa458c85cddfb527d712a2e23409b20d9d691b259" exitCode=0 Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.543118 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7wh4n" event={"ID":"4f403243-0359-478d-a3a6-29a8f0bc29e2","Type":"ContainerDied","Data":"b88e53f360c79b642215822aa458c85cddfb527d712a2e23409b20d9d691b259"} Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.543165 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7wh4n" event={"ID":"4f403243-0359-478d-a3a6-29a8f0bc29e2","Type":"ContainerDied","Data":"b542c5dbcb707bb656b636afb6aa1bcc3a67f0090bf88281e297bd475aa9bd3f"} Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.543190 4769 scope.go:117] "RemoveContainer" containerID="b88e53f360c79b642215822aa458c85cddfb527d712a2e23409b20d9d691b259" Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.543350 4769 util.go:48] "No ready sandbox for pod can be found. 
Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.547391 4769 generic.go:334] "Generic (PLEG): container finished" podID="652c2c5a-f885-4bf3-a4f8-73a4717f6a3a" containerID="d825a6e9070be650270f2a51743038dd26cc2e4afe06ccff5aa90cefb1c29a2b" exitCode=0
Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.547566 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2w22" event={"ID":"652c2c5a-f885-4bf3-a4f8-73a4717f6a3a","Type":"ContainerDied","Data":"d825a6e9070be650270f2a51743038dd26cc2e4afe06ccff5aa90cefb1c29a2b"}
Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.550262 4769 generic.go:334] "Generic (PLEG): container finished" podID="dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae" containerID="e34ed27b31ae8964c9182b8aa629d506dd39a530839a18c60e8a9d7b09eba8d4" exitCode=0
Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.550390 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5jwbt" event={"ID":"dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae","Type":"ContainerDied","Data":"e34ed27b31ae8964c9182b8aa629d506dd39a530839a18c60e8a9d7b09eba8d4"}
Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.553963 4769 generic.go:334] "Generic (PLEG): container finished" podID="98dd81ac-1a92-4d5a-9e09-bcc49ac33a85" containerID="2531649194d6834a01b61908b7793b00e8109633abda7d5a02d5eb68f320b893" exitCode=0
Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.554035 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v8jk5" event={"ID":"98dd81ac-1a92-4d5a-9e09-bcc49ac33a85","Type":"ContainerDied","Data":"2531649194d6834a01b61908b7793b00e8109633abda7d5a02d5eb68f320b893"}
Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.556293 4769 generic.go:334] "Generic (PLEG): container finished" podID="7d9e80ce-c46e-4a99-814e-0d9b1b65623f" containerID="40c54e06453c65c374b60fc978fde1151fc81cdd83905f6d1eab45b8f04a0be1" exitCode=0
Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.556330 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lxbp4" event={"ID":"7d9e80ce-c46e-4a99-814e-0d9b1b65623f","Type":"ContainerDied","Data":"40c54e06453c65c374b60fc978fde1151fc81cdd83905f6d1eab45b8f04a0be1"}
Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.577072 4769 scope.go:117] "RemoveContainer" containerID="c32df72a8ee39ee0d3f1c526bf4f6f62cee45d6cd2f6eccfd82a50af54dc18b6"
Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.612125 4769 scope.go:117] "RemoveContainer" containerID="4c144c7583b39f46ce262d7733d67ac1e5ba5328388a3f5612a2fae5ceb8a4dd"
Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.626072 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f403243-0359-478d-a3a6-29a8f0bc29e2-catalog-content\") pod \"4f403243-0359-478d-a3a6-29a8f0bc29e2\" (UID: \"4f403243-0359-478d-a3a6-29a8f0bc29e2\") "
Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.626180 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xx5tc\" (UniqueName: \"kubernetes.io/projected/4f403243-0359-478d-a3a6-29a8f0bc29e2-kube-api-access-xx5tc\") pod \"4f403243-0359-478d-a3a6-29a8f0bc29e2\" (UID: \"4f403243-0359-478d-a3a6-29a8f0bc29e2\") "
Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.626236 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f403243-0359-478d-a3a6-29a8f0bc29e2-utilities\") pod \"4f403243-0359-478d-a3a6-29a8f0bc29e2\" (UID: \"4f403243-0359-478d-a3a6-29a8f0bc29e2\") "
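[Annotation] The generic.go:334 "Generic (PLEG): container finished" lines above come from the Pod Lifecycle Event Generator, which periodically relists container states and diffs the old snapshot against the new one to emit ContainerDied/ContainerStarted events into the sync loop. A self-contained sketch of that diff, with toy types standing in for the real PLEG structures:

package main

import "fmt"

type event struct{ ContainerID, Type string }

// diff emits lifecycle events by comparing two relist snapshots
// (containerID -> state), the same idea behind generic.go's relisting.
func diff(old, cur map[string]string) []event {
	var evs []event
	for id, state := range cur {
		if old[id] == state {
			continue
		}
		switch state {
		case "running":
			evs = append(evs, event{id, "ContainerStarted"})
		case "exited":
			evs = append(evs, event{id, "ContainerDied"})
		}
	}
	return evs
}

func main() {
	old := map[string]string{"b88e53f360c79b642215822aa458c85cddfb527d712a2e23409b20d9d691b259": "running"}
	cur := map[string]string{"b88e53f360c79b642215822aa458c85cddfb527d712a2e23409b20d9d691b259": "exited"}
	for _, e := range diff(old, cur) {
		fmt.Printf("SyncLoop (PLEG): event %s for %s\n", e.Type, e.ContainerID)
	}
}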
Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.634862 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f403243-0359-478d-a3a6-29a8f0bc29e2-utilities" (OuterVolumeSpecName: "utilities") pod "4f403243-0359-478d-a3a6-29a8f0bc29e2" (UID: "4f403243-0359-478d-a3a6-29a8f0bc29e2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.637911 4769 scope.go:117] "RemoveContainer" containerID="b88e53f360c79b642215822aa458c85cddfb527d712a2e23409b20d9d691b259"
Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.638358 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f403243-0359-478d-a3a6-29a8f0bc29e2-kube-api-access-xx5tc" (OuterVolumeSpecName: "kube-api-access-xx5tc") pod "4f403243-0359-478d-a3a6-29a8f0bc29e2" (UID: "4f403243-0359-478d-a3a6-29a8f0bc29e2"). InnerVolumeSpecName "kube-api-access-xx5tc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 13:49:10 crc kubenswrapper[4769]: E0122 13:49:10.639301 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b88e53f360c79b642215822aa458c85cddfb527d712a2e23409b20d9d691b259\": container with ID starting with b88e53f360c79b642215822aa458c85cddfb527d712a2e23409b20d9d691b259 not found: ID does not exist" containerID="b88e53f360c79b642215822aa458c85cddfb527d712a2e23409b20d9d691b259"
Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.639361 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b88e53f360c79b642215822aa458c85cddfb527d712a2e23409b20d9d691b259"} err="failed to get container status \"b88e53f360c79b642215822aa458c85cddfb527d712a2e23409b20d9d691b259\": rpc error: code = NotFound desc = could not find container \"b88e53f360c79b642215822aa458c85cddfb527d712a2e23409b20d9d691b259\": container with ID starting with b88e53f360c79b642215822aa458c85cddfb527d712a2e23409b20d9d691b259 not found: ID does not exist"
Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.639392 4769 scope.go:117] "RemoveContainer" containerID="c32df72a8ee39ee0d3f1c526bf4f6f62cee45d6cd2f6eccfd82a50af54dc18b6"
Jan 22 13:49:10 crc kubenswrapper[4769]: E0122 13:49:10.640334 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c32df72a8ee39ee0d3f1c526bf4f6f62cee45d6cd2f6eccfd82a50af54dc18b6\": container with ID starting with c32df72a8ee39ee0d3f1c526bf4f6f62cee45d6cd2f6eccfd82a50af54dc18b6 not found: ID does not exist" containerID="c32df72a8ee39ee0d3f1c526bf4f6f62cee45d6cd2f6eccfd82a50af54dc18b6"
Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.640358 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c32df72a8ee39ee0d3f1c526bf4f6f62cee45d6cd2f6eccfd82a50af54dc18b6"} err="failed to get container status \"c32df72a8ee39ee0d3f1c526bf4f6f62cee45d6cd2f6eccfd82a50af54dc18b6\": rpc error: code = NotFound desc = could not find container \"c32df72a8ee39ee0d3f1c526bf4f6f62cee45d6cd2f6eccfd82a50af54dc18b6\": container with ID starting with c32df72a8ee39ee0d3f1c526bf4f6f62cee45d6cd2f6eccfd82a50af54dc18b6 not found: ID does not exist"
Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.640376 4769 scope.go:117] "RemoveContainer" containerID="4c144c7583b39f46ce262d7733d67ac1e5ba5328388a3f5612a2fae5ceb8a4dd"
Jan 22 13:49:10 crc kubenswrapper[4769]: E0122 13:49:10.641053 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c144c7583b39f46ce262d7733d67ac1e5ba5328388a3f5612a2fae5ceb8a4dd\": container with ID starting with 4c144c7583b39f46ce262d7733d67ac1e5ba5328388a3f5612a2fae5ceb8a4dd not found: ID does not exist" containerID="4c144c7583b39f46ce262d7733d67ac1e5ba5328388a3f5612a2fae5ceb8a4dd"
Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.641086 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c144c7583b39f46ce262d7733d67ac1e5ba5328388a3f5612a2fae5ceb8a4dd"} err="failed to get container status \"4c144c7583b39f46ce262d7733d67ac1e5ba5328388a3f5612a2fae5ceb8a4dd\": rpc error: code = NotFound desc = could not find container \"4c144c7583b39f46ce262d7733d67ac1e5ba5328388a3f5612a2fae5ceb8a4dd\": container with ID starting with 4c144c7583b39f46ce262d7733d67ac1e5ba5328388a3f5612a2fae5ceb8a4dd not found: ID does not exist"
Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.641099 4769 scope.go:117] "RemoveContainer" containerID="63ce7caf2f29fa4c750335f093e515944a1c8003ddf040ccfa68087863d13e90"
Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.694756 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f403243-0359-478d-a3a6-29a8f0bc29e2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4f403243-0359-478d-a3a6-29a8f0bc29e2" (UID: "4f403243-0359-478d-a3a6-29a8f0bc29e2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.727248 4769 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f403243-0359-478d-a3a6-29a8f0bc29e2-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.727269 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xx5tc\" (UniqueName: \"kubernetes.io/projected/4f403243-0359-478d-a3a6-29a8f0bc29e2-kube-api-access-xx5tc\") on node \"crc\" DevicePath \"\""
Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.727281 4769 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f403243-0359-478d-a3a6-29a8f0bc29e2-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.728972 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k2w22"
Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.735041 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-5jwbt"
Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.745644 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v8jk5"
Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.748770 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lxbp4"
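[Annotation] The E0122 log.go:32 "code = NotFound" errors paired with pod_container_deletor.go:53 above are benign: the container was already gone from CRI-O by the time the deletor asked for its status, and a NotFound during removal is treated as the removal having already happened. A minimal sketch of that tolerance using the real grpc status/codes packages, with a simulated runtime error in place of a live CRI connection:

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

func main() {
	// Simulated runtime response, shaped like the log.go:32 error above.
	err := status.Error(codes.NotFound,
		"could not find container \"b88e53f360c79b642215822aa458c85cddfb527d712a2e23409b20d9d691b259\"")

	// The deletor's tolerance: NotFound during removal means the container is
	// already gone, so the delete is counted as done rather than failed.
	if status.Code(err) == codes.NotFound {
		fmt.Println("DeleteContainer: container already removed; treating as success")
		return
	}
	fmt.Println("DeleteContainer failed:", err)
}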
Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.776103 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7vfmb"]
Jan 22 13:49:10 crc kubenswrapper[4769]: W0122 13:49:10.782509 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1cfacd8e_cbec_4f68_b90c_ede3a679e454.slice/crio-028babb366ca965535d727a422b2a74df211c727abb58a4b5897663ebebca971 WatchSource:0}: Error finding container 028babb366ca965535d727a422b2a74df211c727abb58a4b5897663ebebca971: Status 404 returned error can't find the container with id 028babb366ca965535d727a422b2a74df211c727abb58a4b5897663ebebca971
Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.829008 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkpck\" (UniqueName: \"kubernetes.io/projected/652c2c5a-f885-4bf3-a4f8-73a4717f6a3a-kube-api-access-qkpck\") pod \"652c2c5a-f885-4bf3-a4f8-73a4717f6a3a\" (UID: \"652c2c5a-f885-4bf3-a4f8-73a4717f6a3a\") "
Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.829131 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/652c2c5a-f885-4bf3-a4f8-73a4717f6a3a-catalog-content\") pod \"652c2c5a-f885-4bf3-a4f8-73a4717f6a3a\" (UID: \"652c2c5a-f885-4bf3-a4f8-73a4717f6a3a\") "
Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.829235 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/652c2c5a-f885-4bf3-a4f8-73a4717f6a3a-utilities\") pod \"652c2c5a-f885-4bf3-a4f8-73a4717f6a3a\" (UID: \"652c2c5a-f885-4bf3-a4f8-73a4717f6a3a\") "
Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.830565 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/652c2c5a-f885-4bf3-a4f8-73a4717f6a3a-utilities" (OuterVolumeSpecName: "utilities") pod "652c2c5a-f885-4bf3-a4f8-73a4717f6a3a" (UID: "652c2c5a-f885-4bf3-a4f8-73a4717f6a3a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.833013 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/652c2c5a-f885-4bf3-a4f8-73a4717f6a3a-kube-api-access-qkpck" (OuterVolumeSpecName: "kube-api-access-qkpck") pod "652c2c5a-f885-4bf3-a4f8-73a4717f6a3a" (UID: "652c2c5a-f885-4bf3-a4f8-73a4717f6a3a"). InnerVolumeSpecName "kube-api-access-qkpck". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.876955 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7wh4n"] Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.881629 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7wh4n"] Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.893189 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f403243-0359-478d-a3a6-29a8f0bc29e2" path="/var/lib/kubelet/pods/4f403243-0359-478d-a3a6-29a8f0bc29e2/volumes" Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.930633 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae-marketplace-trusted-ca\") pod \"dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae\" (UID: \"dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae\") " Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.930678 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d9e80ce-c46e-4a99-814e-0d9b1b65623f-utilities\") pod \"7d9e80ce-c46e-4a99-814e-0d9b1b65623f\" (UID: \"7d9e80ce-c46e-4a99-814e-0d9b1b65623f\") " Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.930704 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d9e80ce-c46e-4a99-814e-0d9b1b65623f-catalog-content\") pod \"7d9e80ce-c46e-4a99-814e-0d9b1b65623f\" (UID: \"7d9e80ce-c46e-4a99-814e-0d9b1b65623f\") " Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.930747 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxdbq\" (UniqueName: \"kubernetes.io/projected/dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae-kube-api-access-vxdbq\") pod \"dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae\" (UID: \"dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae\") " Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.930766 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dm4mw\" (UniqueName: \"kubernetes.io/projected/98dd81ac-1a92-4d5a-9e09-bcc49ac33a85-kube-api-access-dm4mw\") pod \"98dd81ac-1a92-4d5a-9e09-bcc49ac33a85\" (UID: \"98dd81ac-1a92-4d5a-9e09-bcc49ac33a85\") " Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.930825 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae-marketplace-operator-metrics\") pod \"dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae\" (UID: \"dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae\") " Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.930845 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98dd81ac-1a92-4d5a-9e09-bcc49ac33a85-catalog-content\") pod \"98dd81ac-1a92-4d5a-9e09-bcc49ac33a85\" (UID: \"98dd81ac-1a92-4d5a-9e09-bcc49ac33a85\") " Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.930870 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x86gf\" (UniqueName: \"kubernetes.io/projected/7d9e80ce-c46e-4a99-814e-0d9b1b65623f-kube-api-access-x86gf\") pod \"7d9e80ce-c46e-4a99-814e-0d9b1b65623f\" (UID: 
\"7d9e80ce-c46e-4a99-814e-0d9b1b65623f\") " Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.930893 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98dd81ac-1a92-4d5a-9e09-bcc49ac33a85-utilities\") pod \"98dd81ac-1a92-4d5a-9e09-bcc49ac33a85\" (UID: \"98dd81ac-1a92-4d5a-9e09-bcc49ac33a85\") " Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.931105 4769 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/652c2c5a-f885-4bf3-a4f8-73a4717f6a3a-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.931117 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qkpck\" (UniqueName: \"kubernetes.io/projected/652c2c5a-f885-4bf3-a4f8-73a4717f6a3a-kube-api-access-qkpck\") on node \"crc\" DevicePath \"\"" Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.931729 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98dd81ac-1a92-4d5a-9e09-bcc49ac33a85-utilities" (OuterVolumeSpecName: "utilities") pod "98dd81ac-1a92-4d5a-9e09-bcc49ac33a85" (UID: "98dd81ac-1a92-4d5a-9e09-bcc49ac33a85"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.933293 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d9e80ce-c46e-4a99-814e-0d9b1b65623f-utilities" (OuterVolumeSpecName: "utilities") pod "7d9e80ce-c46e-4a99-814e-0d9b1b65623f" (UID: "7d9e80ce-c46e-4a99-814e-0d9b1b65623f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.933647 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae" (UID: "dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.936133 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae" (UID: "dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.938103 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d9e80ce-c46e-4a99-814e-0d9b1b65623f-kube-api-access-x86gf" (OuterVolumeSpecName: "kube-api-access-x86gf") pod "7d9e80ce-c46e-4a99-814e-0d9b1b65623f" (UID: "7d9e80ce-c46e-4a99-814e-0d9b1b65623f"). InnerVolumeSpecName "kube-api-access-x86gf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.944570 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98dd81ac-1a92-4d5a-9e09-bcc49ac33a85-kube-api-access-dm4mw" (OuterVolumeSpecName: "kube-api-access-dm4mw") pod "98dd81ac-1a92-4d5a-9e09-bcc49ac33a85" (UID: "98dd81ac-1a92-4d5a-9e09-bcc49ac33a85"). 
InnerVolumeSpecName "kube-api-access-dm4mw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.945058 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae-kube-api-access-vxdbq" (OuterVolumeSpecName: "kube-api-access-vxdbq") pod "dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae" (UID: "dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae"). InnerVolumeSpecName "kube-api-access-vxdbq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.962364 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/652c2c5a-f885-4bf3-a4f8-73a4717f6a3a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "652c2c5a-f885-4bf3-a4f8-73a4717f6a3a" (UID: "652c2c5a-f885-4bf3-a4f8-73a4717f6a3a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.981527 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98dd81ac-1a92-4d5a-9e09-bcc49ac33a85-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "98dd81ac-1a92-4d5a-9e09-bcc49ac33a85" (UID: "98dd81ac-1a92-4d5a-9e09-bcc49ac33a85"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.997729 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d9e80ce-c46e-4a99-814e-0d9b1b65623f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7d9e80ce-c46e-4a99-814e-0d9b1b65623f" (UID: "7d9e80ce-c46e-4a99-814e-0d9b1b65623f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 13:49:11 crc kubenswrapper[4769]: I0122 13:49:11.032089 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxdbq\" (UniqueName: \"kubernetes.io/projected/dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae-kube-api-access-vxdbq\") on node \"crc\" DevicePath \"\"" Jan 22 13:49:11 crc kubenswrapper[4769]: I0122 13:49:11.032151 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dm4mw\" (UniqueName: \"kubernetes.io/projected/98dd81ac-1a92-4d5a-9e09-bcc49ac33a85-kube-api-access-dm4mw\") on node \"crc\" DevicePath \"\"" Jan 22 13:49:11 crc kubenswrapper[4769]: I0122 13:49:11.032169 4769 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/652c2c5a-f885-4bf3-a4f8-73a4717f6a3a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 13:49:11 crc kubenswrapper[4769]: I0122 13:49:11.032181 4769 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 22 13:49:11 crc kubenswrapper[4769]: I0122 13:49:11.032194 4769 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98dd81ac-1a92-4d5a-9e09-bcc49ac33a85-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 13:49:11 crc kubenswrapper[4769]: I0122 13:49:11.032207 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x86gf\" (UniqueName: \"kubernetes.io/projected/7d9e80ce-c46e-4a99-814e-0d9b1b65623f-kube-api-access-x86gf\") on node \"crc\" DevicePath \"\"" Jan 22 13:49:11 crc kubenswrapper[4769]: I0122 13:49:11.032218 4769 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98dd81ac-1a92-4d5a-9e09-bcc49ac33a85-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 13:49:11 crc kubenswrapper[4769]: I0122 13:49:11.032231 4769 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 13:49:11 crc kubenswrapper[4769]: I0122 13:49:11.032242 4769 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d9e80ce-c46e-4a99-814e-0d9b1b65623f-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 13:49:11 crc kubenswrapper[4769]: I0122 13:49:11.032252 4769 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d9e80ce-c46e-4a99-814e-0d9b1b65623f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 13:49:11 crc kubenswrapper[4769]: I0122 13:49:11.563812 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5jwbt" event={"ID":"dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae","Type":"ContainerDied","Data":"c437a788f729ec1c74235c0c86ed4e15424a790ae709346c3620566dfd2a5bb2"} Jan 22 13:49:11 crc kubenswrapper[4769]: I0122 13:49:11.563884 4769 scope.go:117] "RemoveContainer" containerID="e34ed27b31ae8964c9182b8aa629d506dd39a530839a18c60e8a9d7b09eba8d4" Jan 22 13:49:11 crc kubenswrapper[4769]: I0122 13:49:11.563898 4769 util.go:48] "No ready sandbox for pod can be found. 
Jan 22 13:49:11 crc kubenswrapper[4769]: I0122 13:49:11.570454 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v8jk5" event={"ID":"98dd81ac-1a92-4d5a-9e09-bcc49ac33a85","Type":"ContainerDied","Data":"6e66e2dbf8bc8a080c55b13a7260516fe1212a4c0154bcf230d5878c8ebeeeed"}
Jan 22 13:49:11 crc kubenswrapper[4769]: I0122 13:49:11.570482 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v8jk5"
Jan 22 13:49:11 crc kubenswrapper[4769]: I0122 13:49:11.577204 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lxbp4"
Jan 22 13:49:11 crc kubenswrapper[4769]: I0122 13:49:11.577383 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lxbp4" event={"ID":"7d9e80ce-c46e-4a99-814e-0d9b1b65623f","Type":"ContainerDied","Data":"87dc0ac39542afbc65ec3e6d0bdb93cd67aa154947a205f465b24220379804bc"}
Jan 22 13:49:11 crc kubenswrapper[4769]: I0122 13:49:11.582150 4769 scope.go:117] "RemoveContainer" containerID="2531649194d6834a01b61908b7793b00e8109633abda7d5a02d5eb68f320b893"
Jan 22 13:49:11 crc kubenswrapper[4769]: I0122 13:49:11.591498 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k2w22"
Jan 22 13:49:11 crc kubenswrapper[4769]: I0122 13:49:11.592018 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2w22" event={"ID":"652c2c5a-f885-4bf3-a4f8-73a4717f6a3a","Type":"ContainerDied","Data":"ab73ea8d8d9a566fef3480c2969fb2296deb50f4ddfdc8ecead203c9dda4e719"}
Jan 22 13:49:11 crc kubenswrapper[4769]: I0122 13:49:11.595443 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7vfmb" event={"ID":"1cfacd8e-cbec-4f68-b90c-ede3a679e454","Type":"ContainerStarted","Data":"6d0480232009b5f6edcca36dcb41700dfaa70a49bb5305e36bb6a17d2e374b50"}
Jan 22 13:49:11 crc kubenswrapper[4769]: I0122 13:49:11.595503 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7vfmb" event={"ID":"1cfacd8e-cbec-4f68-b90c-ede3a679e454","Type":"ContainerStarted","Data":"028babb366ca965535d727a422b2a74df211c727abb58a4b5897663ebebca971"}
Jan 22 13:49:11 crc kubenswrapper[4769]: I0122 13:49:11.595847 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-7vfmb"
Jan 22 13:49:11 crc kubenswrapper[4769]: I0122 13:49:11.596976 4769 scope.go:117] "RemoveContainer" containerID="19f11c0236c241f234013da4669e8dd67b3f4430afe2db85d03abaaa7cb48e7c"
Jan 22 13:49:11 crc kubenswrapper[4769]: I0122 13:49:11.599388 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-7vfmb"
Jan 22 13:49:11 crc kubenswrapper[4769]: I0122 13:49:11.612515 4769 scope.go:117] "RemoveContainer" containerID="bd94526c2545e7d42d2caa419fef7b4eaae03cecfaac7722e27dfd4ed49fa03a"
Jan 22 13:49:11 crc kubenswrapper[4769]: I0122 13:49:11.623270 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-7vfmb" podStartSLOduration=2.623250703 podStartE2EDuration="2.623250703s" podCreationTimestamp="2026-01-22 13:49:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:49:11.618742882 +0000 UTC m=+331.029852811" watchObservedRunningTime="2026-01-22 13:49:11.623250703 +0000 UTC m=+331.034360632"
13:49:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:49:11.618742882 +0000 UTC m=+331.029852811" watchObservedRunningTime="2026-01-22 13:49:11.623250703 +0000 UTC m=+331.034360632" Jan 22 13:49:11 crc kubenswrapper[4769]: I0122 13:49:11.637289 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5jwbt"] Jan 22 13:49:11 crc kubenswrapper[4769]: I0122 13:49:11.637340 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5jwbt"] Jan 22 13:49:11 crc kubenswrapper[4769]: I0122 13:49:11.642929 4769 scope.go:117] "RemoveContainer" containerID="40c54e06453c65c374b60fc978fde1151fc81cdd83905f6d1eab45b8f04a0be1" Jan 22 13:49:11 crc kubenswrapper[4769]: I0122 13:49:11.673549 4769 scope.go:117] "RemoveContainer" containerID="0b4e548d90afb445385c5445511aa7202d16841342834b94c99673ef067eba6b" Jan 22 13:49:11 crc kubenswrapper[4769]: I0122 13:49:11.674233 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-v8jk5"] Jan 22 13:49:11 crc kubenswrapper[4769]: I0122 13:49:11.679996 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-v8jk5"] Jan 22 13:49:11 crc kubenswrapper[4769]: I0122 13:49:11.693899 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k2w22"] Jan 22 13:49:11 crc kubenswrapper[4769]: I0122 13:49:11.696776 4769 scope.go:117] "RemoveContainer" containerID="f32dd634065691a644d2461a7fae6aa8b2a0092557591202f1589d051602d962" Jan 22 13:49:11 crc kubenswrapper[4769]: I0122 13:49:11.701519 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-k2w22"] Jan 22 13:49:11 crc kubenswrapper[4769]: I0122 13:49:11.706697 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lxbp4"] Jan 22 13:49:11 crc kubenswrapper[4769]: I0122 13:49:11.710941 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lxbp4"] Jan 22 13:49:11 crc kubenswrapper[4769]: I0122 13:49:11.711105 4769 scope.go:117] "RemoveContainer" containerID="d825a6e9070be650270f2a51743038dd26cc2e4afe06ccff5aa90cefb1c29a2b" Jan 22 13:49:11 crc kubenswrapper[4769]: I0122 13:49:11.728703 4769 scope.go:117] "RemoveContainer" containerID="fa803241b9a5ea5819645ac5f5279180cdfd0cd95f936430c68e37095716dc0b" Jan 22 13:49:11 crc kubenswrapper[4769]: I0122 13:49:11.743441 4769 scope.go:117] "RemoveContainer" containerID="5773768bc9993d556325ab6b5012f24996ced11ddc55ad2bd215bb338220f42b" Jan 22 13:49:12 crc kubenswrapper[4769]: I0122 13:49:12.889760 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="652c2c5a-f885-4bf3-a4f8-73a4717f6a3a" path="/var/lib/kubelet/pods/652c2c5a-f885-4bf3-a4f8-73a4717f6a3a/volumes" Jan 22 13:49:12 crc kubenswrapper[4769]: I0122 13:49:12.890772 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d9e80ce-c46e-4a99-814e-0d9b1b65623f" path="/var/lib/kubelet/pods/7d9e80ce-c46e-4a99-814e-0d9b1b65623f/volumes" Jan 22 13:49:12 crc kubenswrapper[4769]: I0122 13:49:12.891467 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98dd81ac-1a92-4d5a-9e09-bcc49ac33a85" path="/var/lib/kubelet/pods/98dd81ac-1a92-4d5a-9e09-bcc49ac33a85/volumes" Jan 22 13:49:12 crc kubenswrapper[4769]: 
Jan 22 13:49:12 crc kubenswrapper[4769]: I0122 13:49:12.976667 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dtrsx"]
Jan 22 13:49:12 crc kubenswrapper[4769]: E0122 13:49:12.979078 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="652c2c5a-f885-4bf3-a4f8-73a4717f6a3a" containerName="extract-utilities"
Jan 22 13:49:12 crc kubenswrapper[4769]: I0122 13:49:12.979116 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="652c2c5a-f885-4bf3-a4f8-73a4717f6a3a" containerName="extract-utilities"
Jan 22 13:49:12 crc kubenswrapper[4769]: E0122 13:49:12.979142 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98dd81ac-1a92-4d5a-9e09-bcc49ac33a85" containerName="extract-utilities"
Jan 22 13:49:12 crc kubenswrapper[4769]: I0122 13:49:12.979156 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="98dd81ac-1a92-4d5a-9e09-bcc49ac33a85" containerName="extract-utilities"
Jan 22 13:49:12 crc kubenswrapper[4769]: E0122 13:49:12.979174 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d9e80ce-c46e-4a99-814e-0d9b1b65623f" containerName="extract-content"
Jan 22 13:49:12 crc kubenswrapper[4769]: I0122 13:49:12.979187 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d9e80ce-c46e-4a99-814e-0d9b1b65623f" containerName="extract-content"
Jan 22 13:49:12 crc kubenswrapper[4769]: E0122 13:49:12.979207 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f403243-0359-478d-a3a6-29a8f0bc29e2" containerName="registry-server"
Jan 22 13:49:12 crc kubenswrapper[4769]: I0122 13:49:12.979219 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f403243-0359-478d-a3a6-29a8f0bc29e2" containerName="registry-server"
Jan 22 13:49:12 crc kubenswrapper[4769]: E0122 13:49:12.979234 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f403243-0359-478d-a3a6-29a8f0bc29e2" containerName="extract-utilities"
Jan 22 13:49:12 crc kubenswrapper[4769]: I0122 13:49:12.979247 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f403243-0359-478d-a3a6-29a8f0bc29e2" containerName="extract-utilities"
Jan 22 13:49:12 crc kubenswrapper[4769]: E0122 13:49:12.979267 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="652c2c5a-f885-4bf3-a4f8-73a4717f6a3a" containerName="registry-server"
Jan 22 13:49:12 crc kubenswrapper[4769]: I0122 13:49:12.979281 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="652c2c5a-f885-4bf3-a4f8-73a4717f6a3a" containerName="registry-server"
Jan 22 13:49:12 crc kubenswrapper[4769]: E0122 13:49:12.979299 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d9e80ce-c46e-4a99-814e-0d9b1b65623f" containerName="registry-server"
Jan 22 13:49:12 crc kubenswrapper[4769]: I0122 13:49:12.979311 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d9e80ce-c46e-4a99-814e-0d9b1b65623f" containerName="registry-server"
Jan 22 13:49:12 crc kubenswrapper[4769]: E0122 13:49:12.979327 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d9e80ce-c46e-4a99-814e-0d9b1b65623f" containerName="extract-utilities"
Jan 22 13:49:12 crc kubenswrapper[4769]: I0122 13:49:12.979339 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d9e80ce-c46e-4a99-814e-0d9b1b65623f" containerName="extract-utilities"
containerName="extract-utilities" Jan 22 13:49:12 crc kubenswrapper[4769]: E0122 13:49:12.979356 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae" containerName="marketplace-operator" Jan 22 13:49:12 crc kubenswrapper[4769]: I0122 13:49:12.979368 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae" containerName="marketplace-operator" Jan 22 13:49:12 crc kubenswrapper[4769]: E0122 13:49:12.979382 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f403243-0359-478d-a3a6-29a8f0bc29e2" containerName="extract-content" Jan 22 13:49:12 crc kubenswrapper[4769]: I0122 13:49:12.979395 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f403243-0359-478d-a3a6-29a8f0bc29e2" containerName="extract-content" Jan 22 13:49:12 crc kubenswrapper[4769]: E0122 13:49:12.979413 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98dd81ac-1a92-4d5a-9e09-bcc49ac33a85" containerName="registry-server" Jan 22 13:49:12 crc kubenswrapper[4769]: I0122 13:49:12.979427 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="98dd81ac-1a92-4d5a-9e09-bcc49ac33a85" containerName="registry-server" Jan 22 13:49:12 crc kubenswrapper[4769]: E0122 13:49:12.979446 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="652c2c5a-f885-4bf3-a4f8-73a4717f6a3a" containerName="extract-content" Jan 22 13:49:12 crc kubenswrapper[4769]: I0122 13:49:12.979457 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="652c2c5a-f885-4bf3-a4f8-73a4717f6a3a" containerName="extract-content" Jan 22 13:49:12 crc kubenswrapper[4769]: E0122 13:49:12.979474 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98dd81ac-1a92-4d5a-9e09-bcc49ac33a85" containerName="extract-content" Jan 22 13:49:12 crc kubenswrapper[4769]: I0122 13:49:12.979486 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="98dd81ac-1a92-4d5a-9e09-bcc49ac33a85" containerName="extract-content" Jan 22 13:49:12 crc kubenswrapper[4769]: E0122 13:49:12.979499 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae" containerName="marketplace-operator" Jan 22 13:49:12 crc kubenswrapper[4769]: I0122 13:49:12.979510 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae" containerName="marketplace-operator" Jan 22 13:49:12 crc kubenswrapper[4769]: I0122 13:49:12.979691 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae" containerName="marketplace-operator" Jan 22 13:49:12 crc kubenswrapper[4769]: I0122 13:49:12.979713 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d9e80ce-c46e-4a99-814e-0d9b1b65623f" containerName="registry-server" Jan 22 13:49:12 crc kubenswrapper[4769]: I0122 13:49:12.979728 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="652c2c5a-f885-4bf3-a4f8-73a4717f6a3a" containerName="registry-server" Jan 22 13:49:12 crc kubenswrapper[4769]: I0122 13:49:12.979749 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="98dd81ac-1a92-4d5a-9e09-bcc49ac33a85" containerName="registry-server" Jan 22 13:49:12 crc kubenswrapper[4769]: I0122 13:49:12.979766 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f403243-0359-478d-a3a6-29a8f0bc29e2" containerName="registry-server" Jan 22 13:49:12 crc kubenswrapper[4769]: I0122 13:49:12.980095 4769 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae" containerName="marketplace-operator" Jan 22 13:49:12 crc kubenswrapper[4769]: I0122 13:49:12.981079 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dtrsx" Jan 22 13:49:12 crc kubenswrapper[4769]: I0122 13:49:12.985333 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dtrsx"] Jan 22 13:49:12 crc kubenswrapper[4769]: I0122 13:49:12.986219 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 22 13:49:13 crc kubenswrapper[4769]: I0122 13:49:13.158565 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5db9abf-deb2-494a-b618-7180fbf1e53e-utilities\") pod \"redhat-operators-dtrsx\" (UID: \"c5db9abf-deb2-494a-b618-7180fbf1e53e\") " pod="openshift-marketplace/redhat-operators-dtrsx" Jan 22 13:49:13 crc kubenswrapper[4769]: I0122 13:49:13.158623 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llktn\" (UniqueName: \"kubernetes.io/projected/c5db9abf-deb2-494a-b618-7180fbf1e53e-kube-api-access-llktn\") pod \"redhat-operators-dtrsx\" (UID: \"c5db9abf-deb2-494a-b618-7180fbf1e53e\") " pod="openshift-marketplace/redhat-operators-dtrsx" Jan 22 13:49:13 crc kubenswrapper[4769]: I0122 13:49:13.158703 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5db9abf-deb2-494a-b618-7180fbf1e53e-catalog-content\") pod \"redhat-operators-dtrsx\" (UID: \"c5db9abf-deb2-494a-b618-7180fbf1e53e\") " pod="openshift-marketplace/redhat-operators-dtrsx" Jan 22 13:49:13 crc kubenswrapper[4769]: I0122 13:49:13.259982 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5db9abf-deb2-494a-b618-7180fbf1e53e-catalog-content\") pod \"redhat-operators-dtrsx\" (UID: \"c5db9abf-deb2-494a-b618-7180fbf1e53e\") " pod="openshift-marketplace/redhat-operators-dtrsx" Jan 22 13:49:13 crc kubenswrapper[4769]: I0122 13:49:13.260094 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5db9abf-deb2-494a-b618-7180fbf1e53e-utilities\") pod \"redhat-operators-dtrsx\" (UID: \"c5db9abf-deb2-494a-b618-7180fbf1e53e\") " pod="openshift-marketplace/redhat-operators-dtrsx" Jan 22 13:49:13 crc kubenswrapper[4769]: I0122 13:49:13.260131 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llktn\" (UniqueName: \"kubernetes.io/projected/c5db9abf-deb2-494a-b618-7180fbf1e53e-kube-api-access-llktn\") pod \"redhat-operators-dtrsx\" (UID: \"c5db9abf-deb2-494a-b618-7180fbf1e53e\") " pod="openshift-marketplace/redhat-operators-dtrsx" Jan 22 13:49:13 crc kubenswrapper[4769]: I0122 13:49:13.260628 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5db9abf-deb2-494a-b618-7180fbf1e53e-catalog-content\") pod \"redhat-operators-dtrsx\" (UID: \"c5db9abf-deb2-494a-b618-7180fbf1e53e\") " pod="openshift-marketplace/redhat-operators-dtrsx" Jan 22 13:49:13 crc kubenswrapper[4769]: I0122 13:49:13.260670 4769 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5db9abf-deb2-494a-b618-7180fbf1e53e-utilities\") pod \"redhat-operators-dtrsx\" (UID: \"c5db9abf-deb2-494a-b618-7180fbf1e53e\") " pod="openshift-marketplace/redhat-operators-dtrsx" Jan 22 13:49:13 crc kubenswrapper[4769]: I0122 13:49:13.276399 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llktn\" (UniqueName: \"kubernetes.io/projected/c5db9abf-deb2-494a-b618-7180fbf1e53e-kube-api-access-llktn\") pod \"redhat-operators-dtrsx\" (UID: \"c5db9abf-deb2-494a-b618-7180fbf1e53e\") " pod="openshift-marketplace/redhat-operators-dtrsx" Jan 22 13:49:13 crc kubenswrapper[4769]: I0122 13:49:13.299106 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dtrsx" Jan 22 13:49:13 crc kubenswrapper[4769]: I0122 13:49:13.573518 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-twpxx"] Jan 22 13:49:13 crc kubenswrapper[4769]: I0122 13:49:13.575234 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-twpxx" Jan 22 13:49:13 crc kubenswrapper[4769]: I0122 13:49:13.576828 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 22 13:49:13 crc kubenswrapper[4769]: I0122 13:49:13.585145 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-twpxx"] Jan 22 13:49:13 crc kubenswrapper[4769]: I0122 13:49:13.667215 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqxmn\" (UniqueName: \"kubernetes.io/projected/d88e1938-2f4c-43c7-9af2-98fb7222cee2-kube-api-access-dqxmn\") pod \"redhat-marketplace-twpxx\" (UID: \"d88e1938-2f4c-43c7-9af2-98fb7222cee2\") " pod="openshift-marketplace/redhat-marketplace-twpxx" Jan 22 13:49:13 crc kubenswrapper[4769]: I0122 13:49:13.667263 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d88e1938-2f4c-43c7-9af2-98fb7222cee2-utilities\") pod \"redhat-marketplace-twpxx\" (UID: \"d88e1938-2f4c-43c7-9af2-98fb7222cee2\") " pod="openshift-marketplace/redhat-marketplace-twpxx" Jan 22 13:49:13 crc kubenswrapper[4769]: I0122 13:49:13.667331 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d88e1938-2f4c-43c7-9af2-98fb7222cee2-catalog-content\") pod \"redhat-marketplace-twpxx\" (UID: \"d88e1938-2f4c-43c7-9af2-98fb7222cee2\") " pod="openshift-marketplace/redhat-marketplace-twpxx" Jan 22 13:49:13 crc kubenswrapper[4769]: I0122 13:49:13.683913 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dtrsx"] Jan 22 13:49:13 crc kubenswrapper[4769]: I0122 13:49:13.768891 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqxmn\" (UniqueName: \"kubernetes.io/projected/d88e1938-2f4c-43c7-9af2-98fb7222cee2-kube-api-access-dqxmn\") pod \"redhat-marketplace-twpxx\" (UID: \"d88e1938-2f4c-43c7-9af2-98fb7222cee2\") " pod="openshift-marketplace/redhat-marketplace-twpxx" Jan 22 13:49:13 crc kubenswrapper[4769]: I0122 13:49:13.768957 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d88e1938-2f4c-43c7-9af2-98fb7222cee2-utilities\") pod \"redhat-marketplace-twpxx\" (UID: \"d88e1938-2f4c-43c7-9af2-98fb7222cee2\") " pod="openshift-marketplace/redhat-marketplace-twpxx" Jan 22 13:49:13 crc kubenswrapper[4769]: I0122 13:49:13.769019 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d88e1938-2f4c-43c7-9af2-98fb7222cee2-catalog-content\") pod \"redhat-marketplace-twpxx\" (UID: \"d88e1938-2f4c-43c7-9af2-98fb7222cee2\") " pod="openshift-marketplace/redhat-marketplace-twpxx" Jan 22 13:49:13 crc kubenswrapper[4769]: I0122 13:49:13.769493 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d88e1938-2f4c-43c7-9af2-98fb7222cee2-catalog-content\") pod \"redhat-marketplace-twpxx\" (UID: \"d88e1938-2f4c-43c7-9af2-98fb7222cee2\") " pod="openshift-marketplace/redhat-marketplace-twpxx" Jan 22 13:49:13 crc kubenswrapper[4769]: I0122 13:49:13.770665 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d88e1938-2f4c-43c7-9af2-98fb7222cee2-utilities\") pod \"redhat-marketplace-twpxx\" (UID: \"d88e1938-2f4c-43c7-9af2-98fb7222cee2\") " pod="openshift-marketplace/redhat-marketplace-twpxx" Jan 22 13:49:13 crc kubenswrapper[4769]: I0122 13:49:13.789745 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqxmn\" (UniqueName: \"kubernetes.io/projected/d88e1938-2f4c-43c7-9af2-98fb7222cee2-kube-api-access-dqxmn\") pod \"redhat-marketplace-twpxx\" (UID: \"d88e1938-2f4c-43c7-9af2-98fb7222cee2\") " pod="openshift-marketplace/redhat-marketplace-twpxx" Jan 22 13:49:13 crc kubenswrapper[4769]: I0122 13:49:13.941880 4769 util.go:30] "No sandbox for pod can be found. 
Jan 22 13:49:14 crc kubenswrapper[4769]: I0122 13:49:14.329666 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-twpxx"]
Jan 22 13:49:14 crc kubenswrapper[4769]: W0122 13:49:14.358950 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd88e1938_2f4c_43c7_9af2_98fb7222cee2.slice/crio-7398c539207c1069ae28abd790cf9fc265e19ae9d66293387a1794e1e2d2e94b WatchSource:0}: Error finding container 7398c539207c1069ae28abd790cf9fc265e19ae9d66293387a1794e1e2d2e94b: Status 404 returned error can't find the container with id 7398c539207c1069ae28abd790cf9fc265e19ae9d66293387a1794e1e2d2e94b
Jan 22 13:49:14 crc kubenswrapper[4769]: I0122 13:49:14.625109 4769 generic.go:334] "Generic (PLEG): container finished" podID="c5db9abf-deb2-494a-b618-7180fbf1e53e" containerID="49753e10ea9e80b5b06c95d93825b264bdbd4245c3df1979127d3c6411fe8943" exitCode=0
Jan 22 13:49:14 crc kubenswrapper[4769]: I0122 13:49:14.625229 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dtrsx" event={"ID":"c5db9abf-deb2-494a-b618-7180fbf1e53e","Type":"ContainerDied","Data":"49753e10ea9e80b5b06c95d93825b264bdbd4245c3df1979127d3c6411fe8943"}
Jan 22 13:49:14 crc kubenswrapper[4769]: I0122 13:49:14.625462 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dtrsx" event={"ID":"c5db9abf-deb2-494a-b618-7180fbf1e53e","Type":"ContainerStarted","Data":"4bd9bfec0be5434224f4e0d8160cdb43c11490454a6a97a2c42832fc0f091f60"}
Jan 22 13:49:14 crc kubenswrapper[4769]: I0122 13:49:14.629448 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-twpxx" event={"ID":"d88e1938-2f4c-43c7-9af2-98fb7222cee2","Type":"ContainerStarted","Data":"0fff4b1a88ef5daf500213bb00928a44781ebb9dc006c5fe161656f2c3a9e8a2"}
Jan 22 13:49:14 crc kubenswrapper[4769]: I0122 13:49:14.629487 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-twpxx" event={"ID":"d88e1938-2f4c-43c7-9af2-98fb7222cee2","Type":"ContainerStarted","Data":"7398c539207c1069ae28abd790cf9fc265e19ae9d66293387a1794e1e2d2e94b"}
Jan 22 13:49:15 crc kubenswrapper[4769]: I0122 13:49:15.372887 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8vlvj"]
Jan 22 13:49:15 crc kubenswrapper[4769]: I0122 13:49:15.374268 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8vlvj"
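[Annotation] The W-level manager.go:1169 "Status 404" warnings (here and earlier for crio-028babb...) are a benign race: cAdvisor's cgroup watch fires as soon as the new crio-* slice appears, before CRI-O has finished registering the container, so the lookup 404s and succeeds on a later event. A generic retry sketch of that pattern, with a stand-in lookup function simulating the race:

package main

import (
	"errors"
	"fmt"
	"time"
)

var errNotFound = errors.New("Status 404 returned error can't find the container")

// lookup stands in for asking the runtime about a cgroup just seen by the
// watcher; the first attempts lose the race with container registration.
func lookup(attempt int) error {
	if attempt < 3 {
		return errNotFound
	}
	return nil
}

func main() {
	for attempt := 1; ; attempt++ {
		if err := lookup(attempt); err != nil {
			fmt.Println("Failed to process watch event:", err)
			time.Sleep(10 * time.Millisecond) // the real manager simply retries on the next event
			continue
		}
		fmt.Println("container found on attempt", attempt)
		return
	}
}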
Jan 22 13:49:15 crc kubenswrapper[4769]: I0122 13:49:15.376196 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 22 13:49:15 crc kubenswrapper[4769]: I0122 13:49:15.389897 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8vlvj"]
Jan 22 13:49:15 crc kubenswrapper[4769]: I0122 13:49:15.487827 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tk5g\" (UniqueName: \"kubernetes.io/projected/6bbcc4b3-c280-4093-9419-7d94204256fe-kube-api-access-5tk5g\") pod \"certified-operators-8vlvj\" (UID: \"6bbcc4b3-c280-4093-9419-7d94204256fe\") " pod="openshift-marketplace/certified-operators-8vlvj"
Jan 22 13:49:15 crc kubenswrapper[4769]: I0122 13:49:15.488651 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bbcc4b3-c280-4093-9419-7d94204256fe-catalog-content\") pod \"certified-operators-8vlvj\" (UID: \"6bbcc4b3-c280-4093-9419-7d94204256fe\") " pod="openshift-marketplace/certified-operators-8vlvj"
Jan 22 13:49:15 crc kubenswrapper[4769]: I0122 13:49:15.488848 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bbcc4b3-c280-4093-9419-7d94204256fe-utilities\") pod \"certified-operators-8vlvj\" (UID: \"6bbcc4b3-c280-4093-9419-7d94204256fe\") " pod="openshift-marketplace/certified-operators-8vlvj"
Jan 22 13:49:15 crc kubenswrapper[4769]: I0122 13:49:15.590805 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tk5g\" (UniqueName: \"kubernetes.io/projected/6bbcc4b3-c280-4093-9419-7d94204256fe-kube-api-access-5tk5g\") pod \"certified-operators-8vlvj\" (UID: \"6bbcc4b3-c280-4093-9419-7d94204256fe\") " pod="openshift-marketplace/certified-operators-8vlvj"
Jan 22 13:49:15 crc kubenswrapper[4769]: I0122 13:49:15.592302 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bbcc4b3-c280-4093-9419-7d94204256fe-catalog-content\") pod \"certified-operators-8vlvj\" (UID: \"6bbcc4b3-c280-4093-9419-7d94204256fe\") " pod="openshift-marketplace/certified-operators-8vlvj"
Jan 22 13:49:15 crc kubenswrapper[4769]: I0122 13:49:15.592449 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bbcc4b3-c280-4093-9419-7d94204256fe-utilities\") pod \"certified-operators-8vlvj\" (UID: \"6bbcc4b3-c280-4093-9419-7d94204256fe\") " pod="openshift-marketplace/certified-operators-8vlvj"
Jan 22 13:49:15 crc kubenswrapper[4769]: I0122 13:49:15.592870 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bbcc4b3-c280-4093-9419-7d94204256fe-catalog-content\") pod \"certified-operators-8vlvj\" (UID: \"6bbcc4b3-c280-4093-9419-7d94204256fe\") " pod="openshift-marketplace/certified-operators-8vlvj"
Jan 22 13:49:15 crc kubenswrapper[4769]: I0122 13:49:15.593324 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bbcc4b3-c280-4093-9419-7d94204256fe-utilities\") pod \"certified-operators-8vlvj\" (UID: \"6bbcc4b3-c280-4093-9419-7d94204256fe\") " pod="openshift-marketplace/certified-operators-8vlvj"
\"6bbcc4b3-c280-4093-9419-7d94204256fe\") " pod="openshift-marketplace/certified-operators-8vlvj" Jan 22 13:49:15 crc kubenswrapper[4769]: I0122 13:49:15.616000 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tk5g\" (UniqueName: \"kubernetes.io/projected/6bbcc4b3-c280-4093-9419-7d94204256fe-kube-api-access-5tk5g\") pod \"certified-operators-8vlvj\" (UID: \"6bbcc4b3-c280-4093-9419-7d94204256fe\") " pod="openshift-marketplace/certified-operators-8vlvj" Jan 22 13:49:15 crc kubenswrapper[4769]: I0122 13:49:15.636562 4769 generic.go:334] "Generic (PLEG): container finished" podID="d88e1938-2f4c-43c7-9af2-98fb7222cee2" containerID="0fff4b1a88ef5daf500213bb00928a44781ebb9dc006c5fe161656f2c3a9e8a2" exitCode=0 Jan 22 13:49:15 crc kubenswrapper[4769]: I0122 13:49:15.636747 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-twpxx" event={"ID":"d88e1938-2f4c-43c7-9af2-98fb7222cee2","Type":"ContainerDied","Data":"0fff4b1a88ef5daf500213bb00928a44781ebb9dc006c5fe161656f2c3a9e8a2"} Jan 22 13:49:15 crc kubenswrapper[4769]: I0122 13:49:15.692369 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8vlvj" Jan 22 13:49:15 crc kubenswrapper[4769]: I0122 13:49:15.978098 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8nrlf"] Jan 22 13:49:15 crc kubenswrapper[4769]: I0122 13:49:15.979592 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8nrlf" Jan 22 13:49:15 crc kubenswrapper[4769]: I0122 13:49:15.982554 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 22 13:49:15 crc kubenswrapper[4769]: I0122 13:49:15.988342 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8nrlf"] Jan 22 13:49:16 crc kubenswrapper[4769]: I0122 13:49:16.077581 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8vlvj"] Jan 22 13:49:16 crc kubenswrapper[4769]: W0122 13:49:16.087220 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6bbcc4b3_c280_4093_9419_7d94204256fe.slice/crio-00e29ab23a9ff4dfebc8f6078c87526dfb3703b5fd76b23bb451588311bf12cf WatchSource:0}: Error finding container 00e29ab23a9ff4dfebc8f6078c87526dfb3703b5fd76b23bb451588311bf12cf: Status 404 returned error can't find the container with id 00e29ab23a9ff4dfebc8f6078c87526dfb3703b5fd76b23bb451588311bf12cf Jan 22 13:49:16 crc kubenswrapper[4769]: I0122 13:49:16.097358 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b9b79f2-127c-4533-a170-8cb16e845c18-catalog-content\") pod \"community-operators-8nrlf\" (UID: \"5b9b79f2-127c-4533-a170-8cb16e845c18\") " pod="openshift-marketplace/community-operators-8nrlf" Jan 22 13:49:16 crc kubenswrapper[4769]: I0122 13:49:16.097418 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqpf8\" (UniqueName: \"kubernetes.io/projected/5b9b79f2-127c-4533-a170-8cb16e845c18-kube-api-access-bqpf8\") pod \"community-operators-8nrlf\" (UID: \"5b9b79f2-127c-4533-a170-8cb16e845c18\") " pod="openshift-marketplace/community-operators-8nrlf" 
Jan 22 13:49:16 crc kubenswrapper[4769]: I0122 13:49:16.097437 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b9b79f2-127c-4533-a170-8cb16e845c18-utilities\") pod \"community-operators-8nrlf\" (UID: \"5b9b79f2-127c-4533-a170-8cb16e845c18\") " pod="openshift-marketplace/community-operators-8nrlf"
Jan 22 13:49:16 crc kubenswrapper[4769]: I0122 13:49:16.198601 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqpf8\" (UniqueName: \"kubernetes.io/projected/5b9b79f2-127c-4533-a170-8cb16e845c18-kube-api-access-bqpf8\") pod \"community-operators-8nrlf\" (UID: \"5b9b79f2-127c-4533-a170-8cb16e845c18\") " pod="openshift-marketplace/community-operators-8nrlf"
Jan 22 13:49:16 crc kubenswrapper[4769]: I0122 13:49:16.198658 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b9b79f2-127c-4533-a170-8cb16e845c18-utilities\") pod \"community-operators-8nrlf\" (UID: \"5b9b79f2-127c-4533-a170-8cb16e845c18\") " pod="openshift-marketplace/community-operators-8nrlf"
Jan 22 13:49:16 crc kubenswrapper[4769]: I0122 13:49:16.198715 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b9b79f2-127c-4533-a170-8cb16e845c18-catalog-content\") pod \"community-operators-8nrlf\" (UID: \"5b9b79f2-127c-4533-a170-8cb16e845c18\") " pod="openshift-marketplace/community-operators-8nrlf"
Jan 22 13:49:16 crc kubenswrapper[4769]: I0122 13:49:16.199220 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b9b79f2-127c-4533-a170-8cb16e845c18-utilities\") pod \"community-operators-8nrlf\" (UID: \"5b9b79f2-127c-4533-a170-8cb16e845c18\") " pod="openshift-marketplace/community-operators-8nrlf"
Jan 22 13:49:16 crc kubenswrapper[4769]: I0122 13:49:16.199274 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b9b79f2-127c-4533-a170-8cb16e845c18-catalog-content\") pod \"community-operators-8nrlf\" (UID: \"5b9b79f2-127c-4533-a170-8cb16e845c18\") " pod="openshift-marketplace/community-operators-8nrlf"
Jan 22 13:49:16 crc kubenswrapper[4769]: I0122 13:49:16.216356 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqpf8\" (UniqueName: \"kubernetes.io/projected/5b9b79f2-127c-4533-a170-8cb16e845c18-kube-api-access-bqpf8\") pod \"community-operators-8nrlf\" (UID: \"5b9b79f2-127c-4533-a170-8cb16e845c18\") " pod="openshift-marketplace/community-operators-8nrlf"
Jan 22 13:49:16 crc kubenswrapper[4769]: I0122 13:49:16.297067 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8nrlf"
Jan 22 13:49:16 crc kubenswrapper[4769]: I0122 13:49:16.643939 4769 generic.go:334] "Generic (PLEG): container finished" podID="6bbcc4b3-c280-4093-9419-7d94204256fe" containerID="a7d3f114d84fdd1b7fc8a96a58d1e8a6cab446d40790a667348247eb14db6048" exitCode=0
Jan 22 13:49:16 crc kubenswrapper[4769]: I0122 13:49:16.644061 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8vlvj" event={"ID":"6bbcc4b3-c280-4093-9419-7d94204256fe","Type":"ContainerDied","Data":"a7d3f114d84fdd1b7fc8a96a58d1e8a6cab446d40790a667348247eb14db6048"}
Jan 22 13:49:16 crc kubenswrapper[4769]: I0122 13:49:16.644912 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8vlvj" event={"ID":"6bbcc4b3-c280-4093-9419-7d94204256fe","Type":"ContainerStarted","Data":"00e29ab23a9ff4dfebc8f6078c87526dfb3703b5fd76b23bb451588311bf12cf"}
Jan 22 13:49:16 crc kubenswrapper[4769]: I0122 13:49:16.648967 4769 generic.go:334] "Generic (PLEG): container finished" podID="d88e1938-2f4c-43c7-9af2-98fb7222cee2" containerID="f66755819f7254a689cbeefb6e794f94d5894872bff4f9c5b200a02dd002c683" exitCode=0
Jan 22 13:49:16 crc kubenswrapper[4769]: I0122 13:49:16.649053 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-twpxx" event={"ID":"d88e1938-2f4c-43c7-9af2-98fb7222cee2","Type":"ContainerDied","Data":"f66755819f7254a689cbeefb6e794f94d5894872bff4f9c5b200a02dd002c683"}
Jan 22 13:49:16 crc kubenswrapper[4769]: I0122 13:49:16.653993 4769 generic.go:334] "Generic (PLEG): container finished" podID="c5db9abf-deb2-494a-b618-7180fbf1e53e" containerID="46c2d1490c2b3d837113558d5cc2951704a2c1cc8261955a692b3e63f7cd3d1b" exitCode=0
Jan 22 13:49:16 crc kubenswrapper[4769]: I0122 13:49:16.654036 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dtrsx" event={"ID":"c5db9abf-deb2-494a-b618-7180fbf1e53e","Type":"ContainerDied","Data":"46c2d1490c2b3d837113558d5cc2951704a2c1cc8261955a692b3e63f7cd3d1b"}
Jan 22 13:49:16 crc kubenswrapper[4769]: I0122 13:49:16.682391 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8nrlf"]
Jan 22 13:49:16 crc kubenswrapper[4769]: W0122 13:49:16.693333 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b9b79f2_127c_4533_a170_8cb16e845c18.slice/crio-79f279b0598123344907312ef57e1189a96915d7f3e641075cbc94cf7016cfa1 WatchSource:0}: Error finding container 79f279b0598123344907312ef57e1189a96915d7f3e641075cbc94cf7016cfa1: Status 404 returned error can't find the container with id 79f279b0598123344907312ef57e1189a96915d7f3e641075cbc94cf7016cfa1
Jan 22 13:49:17 crc kubenswrapper[4769]: I0122 13:49:17.667773 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-twpxx" event={"ID":"d88e1938-2f4c-43c7-9af2-98fb7222cee2","Type":"ContainerStarted","Data":"0a3d25e60aeabb9720241aea7707a518021511464c51ba7e6020079946a70675"}
Jan 22 13:49:17 crc kubenswrapper[4769]: I0122 13:49:17.671675 4769 generic.go:334] "Generic (PLEG): container finished" podID="5b9b79f2-127c-4533-a170-8cb16e845c18" containerID="bfea64a322374f9fefb725dd0c996f81ee60b921f2c788b5f620e9e7d4d9118e" exitCode=0
Jan 22 13:49:17 crc kubenswrapper[4769]: I0122 13:49:17.671751 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8nrlf" event={"ID":"5b9b79f2-127c-4533-a170-8cb16e845c18","Type":"ContainerDied","Data":"bfea64a322374f9fefb725dd0c996f81ee60b921f2c788b5f620e9e7d4d9118e"}
Jan 22 13:49:17 crc kubenswrapper[4769]: I0122 13:49:17.671835 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8nrlf" event={"ID":"5b9b79f2-127c-4533-a170-8cb16e845c18","Type":"ContainerStarted","Data":"79f279b0598123344907312ef57e1189a96915d7f3e641075cbc94cf7016cfa1"}
Jan 22 13:49:17 crc kubenswrapper[4769]: I0122 13:49:17.690401 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-twpxx" podStartSLOduration=2.118769379 podStartE2EDuration="4.690381295s" podCreationTimestamp="2026-01-22 13:49:13 +0000 UTC" firstStartedPulling="2026-01-22 13:49:14.631216912 +0000 UTC m=+334.042326841" lastFinishedPulling="2026-01-22 13:49:17.202828828 +0000 UTC m=+336.613938757" observedRunningTime="2026-01-22 13:49:17.685410451 +0000 UTC m=+337.096520380" watchObservedRunningTime="2026-01-22 13:49:17.690381295 +0000 UTC m=+337.101491234"
Jan 22 13:49:18 crc kubenswrapper[4769]: I0122 13:49:18.678411 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8nrlf" event={"ID":"5b9b79f2-127c-4533-a170-8cb16e845c18","Type":"ContainerStarted","Data":"30d77cde715c85c3ef50147b03698d9c5cc0d0b77b0369a4eb38e4795f5ee192"}
Jan 22 13:49:18 crc kubenswrapper[4769]: I0122 13:49:18.681087 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dtrsx" event={"ID":"c5db9abf-deb2-494a-b618-7180fbf1e53e","Type":"ContainerStarted","Data":"40d697b4c769615858c7997f36004ed5a22a9f890686a7a882dfd468a26735dd"}
Jan 22 13:49:18 crc kubenswrapper[4769]: I0122 13:49:18.684939 4769 generic.go:334] "Generic (PLEG): container finished" podID="6bbcc4b3-c280-4093-9419-7d94204256fe" containerID="2e8cfc5abcfaebbc01e5c63a4c33838ac6db3f9d9a0ddc3d517cfd24231e91e3" exitCode=0
Jan 22 13:49:18 crc kubenswrapper[4769]: I0122 13:49:18.685586 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8vlvj" event={"ID":"6bbcc4b3-c280-4093-9419-7d94204256fe","Type":"ContainerDied","Data":"2e8cfc5abcfaebbc01e5c63a4c33838ac6db3f9d9a0ddc3d517cfd24231e91e3"}
Jan 22 13:49:18 crc kubenswrapper[4769]: I0122 13:49:18.746099 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dtrsx" podStartSLOduration=3.768730363 podStartE2EDuration="6.746083235s" podCreationTimestamp="2026-01-22 13:49:12 +0000 UTC" firstStartedPulling="2026-01-22 13:49:14.626540237 +0000 UTC m=+334.037650166" lastFinishedPulling="2026-01-22 13:49:17.603893109 +0000 UTC m=+337.015003038" observedRunningTime="2026-01-22 13:49:18.74348063 +0000 UTC m=+338.154590559" watchObservedRunningTime="2026-01-22 13:49:18.746083235 +0000 UTC m=+338.157193164"
containerID="30d77cde715c85c3ef50147b03698d9c5cc0d0b77b0369a4eb38e4795f5ee192" exitCode=0 Jan 22 13:49:19 crc kubenswrapper[4769]: I0122 13:49:19.694831 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8nrlf" event={"ID":"5b9b79f2-127c-4533-a170-8cb16e845c18","Type":"ContainerDied","Data":"30d77cde715c85c3ef50147b03698d9c5cc0d0b77b0369a4eb38e4795f5ee192"} Jan 22 13:49:19 crc kubenswrapper[4769]: I0122 13:49:19.713564 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8vlvj" podStartSLOduration=2.231917061 podStartE2EDuration="4.713548148s" podCreationTimestamp="2026-01-22 13:49:15 +0000 UTC" firstStartedPulling="2026-01-22 13:49:16.648054723 +0000 UTC m=+336.059164642" lastFinishedPulling="2026-01-22 13:49:19.12968577 +0000 UTC m=+338.540795729" observedRunningTime="2026-01-22 13:49:19.711750997 +0000 UTC m=+339.122860946" watchObservedRunningTime="2026-01-22 13:49:19.713548148 +0000 UTC m=+339.124658077" Jan 22 13:49:21 crc kubenswrapper[4769]: I0122 13:49:21.707940 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8nrlf" event={"ID":"5b9b79f2-127c-4533-a170-8cb16e845c18","Type":"ContainerStarted","Data":"98eaddfcc73d3f67c6032f990f6435d2df30450e46ad2bda1c74b7fecd91fd0d"} Jan 22 13:49:21 crc kubenswrapper[4769]: I0122 13:49:21.726188 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8nrlf" podStartSLOduration=4.304054694 podStartE2EDuration="6.726170367s" podCreationTimestamp="2026-01-22 13:49:15 +0000 UTC" firstStartedPulling="2026-01-22 13:49:17.673107935 +0000 UTC m=+337.084217864" lastFinishedPulling="2026-01-22 13:49:20.095223608 +0000 UTC m=+339.506333537" observedRunningTime="2026-01-22 13:49:21.724653283 +0000 UTC m=+341.135763232" watchObservedRunningTime="2026-01-22 13:49:21.726170367 +0000 UTC m=+341.137280296" Jan 22 13:49:23 crc kubenswrapper[4769]: I0122 13:49:23.299823 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dtrsx" Jan 22 13:49:23 crc kubenswrapper[4769]: I0122 13:49:23.300162 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dtrsx" Jan 22 13:49:23 crc kubenswrapper[4769]: I0122 13:49:23.344695 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dtrsx" Jan 22 13:49:23 crc kubenswrapper[4769]: I0122 13:49:23.751611 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dtrsx" Jan 22 13:49:23 crc kubenswrapper[4769]: I0122 13:49:23.942202 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-twpxx" Jan 22 13:49:23 crc kubenswrapper[4769]: I0122 13:49:23.942535 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-twpxx" Jan 22 13:49:23 crc kubenswrapper[4769]: I0122 13:49:23.977715 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-twpxx" Jan 22 13:49:24 crc kubenswrapper[4769]: I0122 13:49:24.757706 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-twpxx" Jan 22 13:49:25 crc kubenswrapper[4769]: I0122 
13:49:25.692653 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8vlvj" Jan 22 13:49:25 crc kubenswrapper[4769]: I0122 13:49:25.692724 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8vlvj" Jan 22 13:49:25 crc kubenswrapper[4769]: I0122 13:49:25.735581 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8vlvj" Jan 22 13:49:25 crc kubenswrapper[4769]: I0122 13:49:25.772934 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8vlvj" Jan 22 13:49:26 crc kubenswrapper[4769]: I0122 13:49:26.297415 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8nrlf" Jan 22 13:49:26 crc kubenswrapper[4769]: I0122 13:49:26.298650 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8nrlf" Jan 22 13:49:26 crc kubenswrapper[4769]: I0122 13:49:26.338058 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8nrlf" Jan 22 13:49:26 crc kubenswrapper[4769]: I0122 13:49:26.770588 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8nrlf" Jan 22 13:49:34 crc kubenswrapper[4769]: I0122 13:49:34.780942 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-fc69x"] Jan 22 13:49:34 crc kubenswrapper[4769]: I0122 13:49:34.782035 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-fc69x" Jan 22 13:49:34 crc kubenswrapper[4769]: I0122 13:49:34.798814 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-fc69x"] Jan 22 13:49:34 crc kubenswrapper[4769]: I0122 13:49:34.931685 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0556840e-70ca-40ac-810a-11b1ddec78d9-bound-sa-token\") pod \"image-registry-66df7c8f76-fc69x\" (UID: \"0556840e-70ca-40ac-810a-11b1ddec78d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-fc69x" Jan 22 13:49:34 crc kubenswrapper[4769]: I0122 13:49:34.931741 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0556840e-70ca-40ac-810a-11b1ddec78d9-ca-trust-extracted\") pod \"image-registry-66df7c8f76-fc69x\" (UID: \"0556840e-70ca-40ac-810a-11b1ddec78d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-fc69x" Jan 22 13:49:34 crc kubenswrapper[4769]: I0122 13:49:34.931844 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-fc69x\" (UID: \"0556840e-70ca-40ac-810a-11b1ddec78d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-fc69x" Jan 22 13:49:34 crc kubenswrapper[4769]: I0122 13:49:34.931878 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" 
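[editor's note] Each catalog pod above walks the same probe sequence: startup reports "unhealthy" once, readiness is logged as the empty result (not yet evaluated, since readiness is gated on the startup probe), then startup flips to "started" and readiness to "ready". A minimal sketch of a startup-plus-readiness probe pair as corev1 structs; the TCP handler, port, and thresholds are pure assumptions, since the log does not include the probe spec:

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	"k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
    	// Hypothetical probe pair producing the ordering seen above: readiness
    	// stays "" until the startup probe reports started.
    	startup := &corev1.Probe{
    		ProbeHandler:     corev1.ProbeHandler{TCPSocket: &corev1.TCPSocketAction{Port: intstr.FromInt(50051)}},
    		PeriodSeconds:    1,  // assumption
    		FailureThreshold: 30, // assumption
    	}
    	readiness := &corev1.Probe{
    		ProbeHandler:  corev1.ProbeHandler{TCPSocket: &corev1.TCPSocketAction{Port: intstr.FromInt(50051)}},
    		PeriodSeconds: 10, // assumption
    	}
    	fmt.Println(startup.FailureThreshold, readiness.PeriodSeconds)
    }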
(UniqueName: \"kubernetes.io/secret/0556840e-70ca-40ac-810a-11b1ddec78d9-installation-pull-secrets\") pod \"image-registry-66df7c8f76-fc69x\" (UID: \"0556840e-70ca-40ac-810a-11b1ddec78d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-fc69x" Jan 22 13:49:34 crc kubenswrapper[4769]: I0122 13:49:34.931903 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0556840e-70ca-40ac-810a-11b1ddec78d9-registry-tls\") pod \"image-registry-66df7c8f76-fc69x\" (UID: \"0556840e-70ca-40ac-810a-11b1ddec78d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-fc69x" Jan 22 13:49:34 crc kubenswrapper[4769]: I0122 13:49:34.931930 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0556840e-70ca-40ac-810a-11b1ddec78d9-registry-certificates\") pod \"image-registry-66df7c8f76-fc69x\" (UID: \"0556840e-70ca-40ac-810a-11b1ddec78d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-fc69x" Jan 22 13:49:34 crc kubenswrapper[4769]: I0122 13:49:34.931951 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0556840e-70ca-40ac-810a-11b1ddec78d9-trusted-ca\") pod \"image-registry-66df7c8f76-fc69x\" (UID: \"0556840e-70ca-40ac-810a-11b1ddec78d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-fc69x" Jan 22 13:49:34 crc kubenswrapper[4769]: I0122 13:49:34.931981 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sblrx\" (UniqueName: \"kubernetes.io/projected/0556840e-70ca-40ac-810a-11b1ddec78d9-kube-api-access-sblrx\") pod \"image-registry-66df7c8f76-fc69x\" (UID: \"0556840e-70ca-40ac-810a-11b1ddec78d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-fc69x" Jan 22 13:49:34 crc kubenswrapper[4769]: I0122 13:49:34.971257 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-fc69x\" (UID: \"0556840e-70ca-40ac-810a-11b1ddec78d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-fc69x" Jan 22 13:49:35 crc kubenswrapper[4769]: I0122 13:49:35.033615 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0556840e-70ca-40ac-810a-11b1ddec78d9-ca-trust-extracted\") pod \"image-registry-66df7c8f76-fc69x\" (UID: \"0556840e-70ca-40ac-810a-11b1ddec78d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-fc69x" Jan 22 13:49:35 crc kubenswrapper[4769]: I0122 13:49:35.033702 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0556840e-70ca-40ac-810a-11b1ddec78d9-installation-pull-secrets\") pod \"image-registry-66df7c8f76-fc69x\" (UID: \"0556840e-70ca-40ac-810a-11b1ddec78d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-fc69x" Jan 22 13:49:35 crc kubenswrapper[4769]: I0122 13:49:35.033728 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0556840e-70ca-40ac-810a-11b1ddec78d9-registry-tls\") pod 
\"image-registry-66df7c8f76-fc69x\" (UID: \"0556840e-70ca-40ac-810a-11b1ddec78d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-fc69x" Jan 22 13:49:35 crc kubenswrapper[4769]: I0122 13:49:35.033754 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0556840e-70ca-40ac-810a-11b1ddec78d9-registry-certificates\") pod \"image-registry-66df7c8f76-fc69x\" (UID: \"0556840e-70ca-40ac-810a-11b1ddec78d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-fc69x" Jan 22 13:49:35 crc kubenswrapper[4769]: I0122 13:49:35.033782 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0556840e-70ca-40ac-810a-11b1ddec78d9-trusted-ca\") pod \"image-registry-66df7c8f76-fc69x\" (UID: \"0556840e-70ca-40ac-810a-11b1ddec78d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-fc69x" Jan 22 13:49:35 crc kubenswrapper[4769]: I0122 13:49:35.033827 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sblrx\" (UniqueName: \"kubernetes.io/projected/0556840e-70ca-40ac-810a-11b1ddec78d9-kube-api-access-sblrx\") pod \"image-registry-66df7c8f76-fc69x\" (UID: \"0556840e-70ca-40ac-810a-11b1ddec78d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-fc69x" Jan 22 13:49:35 crc kubenswrapper[4769]: I0122 13:49:35.033894 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0556840e-70ca-40ac-810a-11b1ddec78d9-bound-sa-token\") pod \"image-registry-66df7c8f76-fc69x\" (UID: \"0556840e-70ca-40ac-810a-11b1ddec78d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-fc69x" Jan 22 13:49:35 crc kubenswrapper[4769]: I0122 13:49:35.034215 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0556840e-70ca-40ac-810a-11b1ddec78d9-ca-trust-extracted\") pod \"image-registry-66df7c8f76-fc69x\" (UID: \"0556840e-70ca-40ac-810a-11b1ddec78d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-fc69x" Jan 22 13:49:35 crc kubenswrapper[4769]: I0122 13:49:35.035278 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0556840e-70ca-40ac-810a-11b1ddec78d9-registry-certificates\") pod \"image-registry-66df7c8f76-fc69x\" (UID: \"0556840e-70ca-40ac-810a-11b1ddec78d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-fc69x" Jan 22 13:49:35 crc kubenswrapper[4769]: I0122 13:49:35.035863 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0556840e-70ca-40ac-810a-11b1ddec78d9-trusted-ca\") pod \"image-registry-66df7c8f76-fc69x\" (UID: \"0556840e-70ca-40ac-810a-11b1ddec78d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-fc69x" Jan 22 13:49:35 crc kubenswrapper[4769]: I0122 13:49:35.042223 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0556840e-70ca-40ac-810a-11b1ddec78d9-registry-tls\") pod \"image-registry-66df7c8f76-fc69x\" (UID: \"0556840e-70ca-40ac-810a-11b1ddec78d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-fc69x" Jan 22 13:49:35 crc kubenswrapper[4769]: I0122 13:49:35.046784 4769 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0556840e-70ca-40ac-810a-11b1ddec78d9-installation-pull-secrets\") pod \"image-registry-66df7c8f76-fc69x\" (UID: \"0556840e-70ca-40ac-810a-11b1ddec78d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-fc69x" Jan 22 13:49:35 crc kubenswrapper[4769]: I0122 13:49:35.051674 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sblrx\" (UniqueName: \"kubernetes.io/projected/0556840e-70ca-40ac-810a-11b1ddec78d9-kube-api-access-sblrx\") pod \"image-registry-66df7c8f76-fc69x\" (UID: \"0556840e-70ca-40ac-810a-11b1ddec78d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-fc69x" Jan 22 13:49:35 crc kubenswrapper[4769]: I0122 13:49:35.052401 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0556840e-70ca-40ac-810a-11b1ddec78d9-bound-sa-token\") pod \"image-registry-66df7c8f76-fc69x\" (UID: \"0556840e-70ca-40ac-810a-11b1ddec78d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-fc69x" Jan 22 13:49:35 crc kubenswrapper[4769]: I0122 13:49:35.101483 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-fc69x" Jan 22 13:49:35 crc kubenswrapper[4769]: I0122 13:49:35.559087 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-fc69x"] Jan 22 13:49:35 crc kubenswrapper[4769]: W0122 13:49:35.567002 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0556840e_70ca_40ac_810a_11b1ddec78d9.slice/crio-894cf8ee48d96c6ce67ad728ea7acc9a04ce91b31c737c897aedede5d47c72ed WatchSource:0}: Error finding container 894cf8ee48d96c6ce67ad728ea7acc9a04ce91b31c737c897aedede5d47c72ed: Status 404 returned error can't find the container with id 894cf8ee48d96c6ce67ad728ea7acc9a04ce91b31c737c897aedede5d47c72ed Jan 22 13:49:35 crc kubenswrapper[4769]: I0122 13:49:35.777336 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-fc69x" event={"ID":"0556840e-70ca-40ac-810a-11b1ddec78d9","Type":"ContainerStarted","Data":"894cf8ee48d96c6ce67ad728ea7acc9a04ce91b31c737c897aedede5d47c72ed"} Jan 22 13:49:38 crc kubenswrapper[4769]: I0122 13:49:38.794272 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-fc69x" event={"ID":"0556840e-70ca-40ac-810a-11b1ddec78d9","Type":"ContainerStarted","Data":"209ed7fbd942a144fd1ffafb5b0573b972f48af0d30d8d2d354eb55cc37b9920"} Jan 22 13:49:38 crc kubenswrapper[4769]: I0122 13:49:38.794587 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-fc69x" Jan 22 13:49:38 crc kubenswrapper[4769]: I0122 13:49:38.813824 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-fc69x" podStartSLOduration=4.813806501 podStartE2EDuration="4.813806501s" podCreationTimestamp="2026-01-22 13:49:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:49:38.810970319 +0000 UTC m=+358.222080248" watchObservedRunningTime="2026-01-22 13:49:38.813806501 +0000 UTC m=+358.224916430" Jan 22 13:49:40 crc kubenswrapper[4769]: I0122 13:49:40.481934 4769 
patch_prober.go:28] interesting pod/machine-config-daemon-hwhw7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 13:49:40 crc kubenswrapper[4769]: I0122 13:49:40.482013 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 13:49:41 crc kubenswrapper[4769]: I0122 13:49:41.442448 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9db9fd7fb-fmp74"] Jan 22 13:49:41 crc kubenswrapper[4769]: I0122 13:49:41.443335 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-9db9fd7fb-fmp74" podUID="bf9268f0-d3a5-470c-b734-a25b11ebb088" containerName="route-controller-manager" containerID="cri-o://3104553fb5aa42e836333e0998d4bb894a479a4adf589398bbdf1b42722c06a3" gracePeriod=30 Jan 22 13:49:41 crc kubenswrapper[4769]: I0122 13:49:41.811469 4769 generic.go:334] "Generic (PLEG): container finished" podID="bf9268f0-d3a5-470c-b734-a25b11ebb088" containerID="3104553fb5aa42e836333e0998d4bb894a479a4adf589398bbdf1b42722c06a3" exitCode=0 Jan 22 13:49:41 crc kubenswrapper[4769]: I0122 13:49:41.811584 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-9db9fd7fb-fmp74" event={"ID":"bf9268f0-d3a5-470c-b734-a25b11ebb088","Type":"ContainerDied","Data":"3104553fb5aa42e836333e0998d4bb894a479a4adf589398bbdf1b42722c06a3"} Jan 22 13:49:41 crc kubenswrapper[4769]: I0122 13:49:41.811763 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-9db9fd7fb-fmp74" event={"ID":"bf9268f0-d3a5-470c-b734-a25b11ebb088","Type":"ContainerDied","Data":"6cc1e5e19564d09af54c555b766313a9b3a7cbbeabd3df7a270e34fcad39380a"} Jan 22 13:49:41 crc kubenswrapper[4769]: I0122 13:49:41.811779 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6cc1e5e19564d09af54c555b766313a9b3a7cbbeabd3df7a270e34fcad39380a" Jan 22 13:49:41 crc kubenswrapper[4769]: I0122 13:49:41.832203 4769 util.go:48] "No ready sandbox for pod can be found. 
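[editor's note] The two machine-config-daemon entries above show the kubelet's HTTP liveness probe getting connection refused on http://127.0.0.1:8798/health. A standalone sketch of the same check, handy for reproducing the failure from the node; the URL comes from the log, while the timeout is an assumption:

    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{Timeout: 2 * time.Second}        // timeout is an assumption
    	resp, err := client.Get("http://127.0.0.1:8798/health") // endpoint from the log
    	if err != nil {
    		// On this node the probe fails exactly like this:
    		// dial tcp 127.0.0.1:8798: connect: connection refused
    		fmt.Println("probe failure:", err)
    		return
    	}
    	defer resp.Body.Close()
    	// The kubelet treats status codes in [200, 400) as probe success.
    	fmt.Println("probe status:", resp.Status)
    }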
Jan 22 13:49:41 crc kubenswrapper[4769]: I0122 13:49:41.442448 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9db9fd7fb-fmp74"]
Jan 22 13:49:41 crc kubenswrapper[4769]: I0122 13:49:41.443335 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-9db9fd7fb-fmp74" podUID="bf9268f0-d3a5-470c-b734-a25b11ebb088" containerName="route-controller-manager" containerID="cri-o://3104553fb5aa42e836333e0998d4bb894a479a4adf589398bbdf1b42722c06a3" gracePeriod=30
Jan 22 13:49:41 crc kubenswrapper[4769]: I0122 13:49:41.811469 4769 generic.go:334] "Generic (PLEG): container finished" podID="bf9268f0-d3a5-470c-b734-a25b11ebb088" containerID="3104553fb5aa42e836333e0998d4bb894a479a4adf589398bbdf1b42722c06a3" exitCode=0
Jan 22 13:49:41 crc kubenswrapper[4769]: I0122 13:49:41.811584 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-9db9fd7fb-fmp74" event={"ID":"bf9268f0-d3a5-470c-b734-a25b11ebb088","Type":"ContainerDied","Data":"3104553fb5aa42e836333e0998d4bb894a479a4adf589398bbdf1b42722c06a3"}
Jan 22 13:49:41 crc kubenswrapper[4769]: I0122 13:49:41.811763 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-9db9fd7fb-fmp74" event={"ID":"bf9268f0-d3a5-470c-b734-a25b11ebb088","Type":"ContainerDied","Data":"6cc1e5e19564d09af54c555b766313a9b3a7cbbeabd3df7a270e34fcad39380a"}
Jan 22 13:49:41 crc kubenswrapper[4769]: I0122 13:49:41.811779 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6cc1e5e19564d09af54c555b766313a9b3a7cbbeabd3df7a270e34fcad39380a"
Jan 22 13:49:41 crc kubenswrapper[4769]: I0122 13:49:41.832203 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-9db9fd7fb-fmp74"
Jan 22 13:49:41 crc kubenswrapper[4769]: I0122 13:49:41.923261 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf9268f0-d3a5-470c-b734-a25b11ebb088-config\") pod \"bf9268f0-d3a5-470c-b734-a25b11ebb088\" (UID: \"bf9268f0-d3a5-470c-b734-a25b11ebb088\") "
Jan 22 13:49:41 crc kubenswrapper[4769]: I0122 13:49:41.923384 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf9268f0-d3a5-470c-b734-a25b11ebb088-serving-cert\") pod \"bf9268f0-d3a5-470c-b734-a25b11ebb088\" (UID: \"bf9268f0-d3a5-470c-b734-a25b11ebb088\") "
Jan 22 13:49:41 crc kubenswrapper[4769]: I0122 13:49:41.923416 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bf9268f0-d3a5-470c-b734-a25b11ebb088-client-ca\") pod \"bf9268f0-d3a5-470c-b734-a25b11ebb088\" (UID: \"bf9268f0-d3a5-470c-b734-a25b11ebb088\") "
Jan 22 13:49:41 crc kubenswrapper[4769]: I0122 13:49:41.923448 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mbhx\" (UniqueName: \"kubernetes.io/projected/bf9268f0-d3a5-470c-b734-a25b11ebb088-kube-api-access-5mbhx\") pod \"bf9268f0-d3a5-470c-b734-a25b11ebb088\" (UID: \"bf9268f0-d3a5-470c-b734-a25b11ebb088\") "
Jan 22 13:49:41 crc kubenswrapper[4769]: I0122 13:49:41.924389 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf9268f0-d3a5-470c-b734-a25b11ebb088-client-ca" (OuterVolumeSpecName: "client-ca") pod "bf9268f0-d3a5-470c-b734-a25b11ebb088" (UID: "bf9268f0-d3a5-470c-b734-a25b11ebb088"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 13:49:41 crc kubenswrapper[4769]: I0122 13:49:41.924427 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf9268f0-d3a5-470c-b734-a25b11ebb088-config" (OuterVolumeSpecName: "config") pod "bf9268f0-d3a5-470c-b734-a25b11ebb088" (UID: "bf9268f0-d3a5-470c-b734-a25b11ebb088"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 13:49:41 crc kubenswrapper[4769]: I0122 13:49:41.928418 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf9268f0-d3a5-470c-b734-a25b11ebb088-kube-api-access-5mbhx" (OuterVolumeSpecName: "kube-api-access-5mbhx") pod "bf9268f0-d3a5-470c-b734-a25b11ebb088" (UID: "bf9268f0-d3a5-470c-b734-a25b11ebb088"). InnerVolumeSpecName "kube-api-access-5mbhx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 13:49:41 crc kubenswrapper[4769]: I0122 13:49:41.928638 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf9268f0-d3a5-470c-b734-a25b11ebb088-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bf9268f0-d3a5-470c-b734-a25b11ebb088" (UID: "bf9268f0-d3a5-470c-b734-a25b11ebb088"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 13:49:42 crc kubenswrapper[4769]: I0122 13:49:42.025383 4769 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf9268f0-d3a5-470c-b734-a25b11ebb088-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 22 13:49:42 crc kubenswrapper[4769]: I0122 13:49:42.025424 4769 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bf9268f0-d3a5-470c-b734-a25b11ebb088-client-ca\") on node \"crc\" DevicePath \"\""
Jan 22 13:49:42 crc kubenswrapper[4769]: I0122 13:49:42.025433 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5mbhx\" (UniqueName: \"kubernetes.io/projected/bf9268f0-d3a5-470c-b734-a25b11ebb088-kube-api-access-5mbhx\") on node \"crc\" DevicePath \"\""
Jan 22 13:49:42 crc kubenswrapper[4769]: I0122 13:49:42.025445 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf9268f0-d3a5-470c-b734-a25b11ebb088-config\") on node \"crc\" DevicePath \"\""
Jan 22 13:49:42 crc kubenswrapper[4769]: I0122 13:49:42.815232 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-9db9fd7fb-fmp74"
Jan 22 13:49:42 crc kubenswrapper[4769]: I0122 13:49:42.840302 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9db9fd7fb-fmp74"]
Jan 22 13:49:42 crc kubenswrapper[4769]: I0122 13:49:42.843277 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9db9fd7fb-fmp74"]
Jan 22 13:49:42 crc kubenswrapper[4769]: I0122 13:49:42.891552 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf9268f0-d3a5-470c-b734-a25b11ebb088" path="/var/lib/kubelet/pods/bf9268f0-d3a5-470c-b734-a25b11ebb088/volumes"
Jan 22 13:49:43 crc kubenswrapper[4769]: I0122 13:49:43.455595 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b57bf8468-vhp4h"]
Jan 22 13:49:43 crc kubenswrapper[4769]: E0122 13:49:43.455907 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf9268f0-d3a5-470c-b734-a25b11ebb088" containerName="route-controller-manager"
Jan 22 13:49:43 crc kubenswrapper[4769]: I0122 13:49:43.455931 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf9268f0-d3a5-470c-b734-a25b11ebb088" containerName="route-controller-manager"
Jan 22 13:49:43 crc kubenswrapper[4769]: I0122 13:49:43.456060 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf9268f0-d3a5-470c-b734-a25b11ebb088" containerName="route-controller-manager"
Jan 22 13:49:43 crc kubenswrapper[4769]: I0122 13:49:43.456502 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b57bf8468-vhp4h"
Jan 22 13:49:43 crc kubenswrapper[4769]: I0122 13:49:43.460279 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 22 13:49:43 crc kubenswrapper[4769]: I0122 13:49:43.460288 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 22 13:49:43 crc kubenswrapper[4769]: I0122 13:49:43.460340 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 22 13:49:43 crc kubenswrapper[4769]: I0122 13:49:43.460354 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 22 13:49:43 crc kubenswrapper[4769]: I0122 13:49:43.460472 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 22 13:49:43 crc kubenswrapper[4769]: I0122 13:49:43.460505 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 22 13:49:43 crc kubenswrapper[4769]: I0122 13:49:43.465747 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b57bf8468-vhp4h"]
Jan 22 13:49:43 crc kubenswrapper[4769]: I0122 13:49:43.544500 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0624b060-2bdf-4498-9a39-3c13923de378-client-ca\") pod \"route-controller-manager-7b57bf8468-vhp4h\" (UID: \"0624b060-2bdf-4498-9a39-3c13923de378\") " pod="openshift-route-controller-manager/route-controller-manager-7b57bf8468-vhp4h"
Jan 22 13:49:43 crc kubenswrapper[4769]: I0122 13:49:43.544809 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0624b060-2bdf-4498-9a39-3c13923de378-config\") pod \"route-controller-manager-7b57bf8468-vhp4h\" (UID: \"0624b060-2bdf-4498-9a39-3c13923de378\") " pod="openshift-route-controller-manager/route-controller-manager-7b57bf8468-vhp4h"
Jan 22 13:49:43 crc kubenswrapper[4769]: I0122 13:49:43.545001 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shxtv\" (UniqueName: \"kubernetes.io/projected/0624b060-2bdf-4498-9a39-3c13923de378-kube-api-access-shxtv\") pod \"route-controller-manager-7b57bf8468-vhp4h\" (UID: \"0624b060-2bdf-4498-9a39-3c13923de378\") " pod="openshift-route-controller-manager/route-controller-manager-7b57bf8468-vhp4h"
Jan 22 13:49:43 crc kubenswrapper[4769]: I0122 13:49:43.545132 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0624b060-2bdf-4498-9a39-3c13923de378-serving-cert\") pod \"route-controller-manager-7b57bf8468-vhp4h\" (UID: \"0624b060-2bdf-4498-9a39-3c13923de378\") " pod="openshift-route-controller-manager/route-controller-manager-7b57bf8468-vhp4h"
Jan 22 13:49:43 crc kubenswrapper[4769]: I0122 13:49:43.646558 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shxtv\" (UniqueName: \"kubernetes.io/projected/0624b060-2bdf-4498-9a39-3c13923de378-kube-api-access-shxtv\") pod \"route-controller-manager-7b57bf8468-vhp4h\" (UID: \"0624b060-2bdf-4498-9a39-3c13923de378\") " pod="openshift-route-controller-manager/route-controller-manager-7b57bf8468-vhp4h"
Jan 22 13:49:43 crc kubenswrapper[4769]: I0122 13:49:43.646645 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0624b060-2bdf-4498-9a39-3c13923de378-serving-cert\") pod \"route-controller-manager-7b57bf8468-vhp4h\" (UID: \"0624b060-2bdf-4498-9a39-3c13923de378\") " pod="openshift-route-controller-manager/route-controller-manager-7b57bf8468-vhp4h"
Jan 22 13:49:43 crc kubenswrapper[4769]: I0122 13:49:43.646679 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0624b060-2bdf-4498-9a39-3c13923de378-client-ca\") pod \"route-controller-manager-7b57bf8468-vhp4h\" (UID: \"0624b060-2bdf-4498-9a39-3c13923de378\") " pod="openshift-route-controller-manager/route-controller-manager-7b57bf8468-vhp4h"
Jan 22 13:49:43 crc kubenswrapper[4769]: I0122 13:49:43.646712 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0624b060-2bdf-4498-9a39-3c13923de378-config\") pod \"route-controller-manager-7b57bf8468-vhp4h\" (UID: \"0624b060-2bdf-4498-9a39-3c13923de378\") " pod="openshift-route-controller-manager/route-controller-manager-7b57bf8468-vhp4h"
Jan 22 13:49:43 crc kubenswrapper[4769]: I0122 13:49:43.647912 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0624b060-2bdf-4498-9a39-3c13923de378-client-ca\") pod \"route-controller-manager-7b57bf8468-vhp4h\" (UID: \"0624b060-2bdf-4498-9a39-3c13923de378\") " pod="openshift-route-controller-manager/route-controller-manager-7b57bf8468-vhp4h"
Jan 22 13:49:43 crc kubenswrapper[4769]: I0122 13:49:43.648871 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0624b060-2bdf-4498-9a39-3c13923de378-config\") pod \"route-controller-manager-7b57bf8468-vhp4h\" (UID: \"0624b060-2bdf-4498-9a39-3c13923de378\") " pod="openshift-route-controller-manager/route-controller-manager-7b57bf8468-vhp4h"
Jan 22 13:49:43 crc kubenswrapper[4769]: I0122 13:49:43.653088 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0624b060-2bdf-4498-9a39-3c13923de378-serving-cert\") pod \"route-controller-manager-7b57bf8468-vhp4h\" (UID: \"0624b060-2bdf-4498-9a39-3c13923de378\") " pod="openshift-route-controller-manager/route-controller-manager-7b57bf8468-vhp4h"
Jan 22 13:49:43 crc kubenswrapper[4769]: I0122 13:49:43.664193 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shxtv\" (UniqueName: \"kubernetes.io/projected/0624b060-2bdf-4498-9a39-3c13923de378-kube-api-access-shxtv\") pod \"route-controller-manager-7b57bf8468-vhp4h\" (UID: \"0624b060-2bdf-4498-9a39-3c13923de378\") " pod="openshift-route-controller-manager/route-controller-manager-7b57bf8468-vhp4h"
Jan 22 13:49:43 crc kubenswrapper[4769]: I0122 13:49:43.772711 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b57bf8468-vhp4h"
Jan 22 13:49:44 crc kubenswrapper[4769]: I0122 13:49:44.181822 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b57bf8468-vhp4h"]
Jan 22 13:49:44 crc kubenswrapper[4769]: W0122 13:49:44.187809 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0624b060_2bdf_4498_9a39_3c13923de378.slice/crio-2db9bae66c453b201eeff18ef735234e01bc923a438f6b2bc730a7e0b9cb1b67 WatchSource:0}: Error finding container 2db9bae66c453b201eeff18ef735234e01bc923a438f6b2bc730a7e0b9cb1b67: Status 404 returned error can't find the container with id 2db9bae66c453b201eeff18ef735234e01bc923a438f6b2bc730a7e0b9cb1b67
Jan 22 13:49:44 crc kubenswrapper[4769]: I0122 13:49:44.829920 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b57bf8468-vhp4h" event={"ID":"0624b060-2bdf-4498-9a39-3c13923de378","Type":"ContainerStarted","Data":"843cbe9217f2b579d9535d27280ed4c9dcec2cc2f1248156f49c49a28bfccfb8"}
Jan 22 13:49:44 crc kubenswrapper[4769]: I0122 13:49:44.829988 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b57bf8468-vhp4h" event={"ID":"0624b060-2bdf-4498-9a39-3c13923de378","Type":"ContainerStarted","Data":"2db9bae66c453b201eeff18ef735234e01bc923a438f6b2bc730a7e0b9cb1b67"}
Jan 22 13:49:44 crc kubenswrapper[4769]: I0122 13:49:44.830530 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7b57bf8468-vhp4h"
Jan 22 13:49:44 crc kubenswrapper[4769]: I0122 13:49:44.852914 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7b57bf8468-vhp4h" podStartSLOduration=3.8528934919999998 podStartE2EDuration="3.852893492s" podCreationTimestamp="2026-01-22 13:49:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:49:44.847207427 +0000 UTC m=+364.258317406" watchObservedRunningTime="2026-01-22 13:49:44.852893492 +0000 UTC m=+364.264003441"
Jan 22 13:49:45 crc kubenswrapper[4769]: I0122 13:49:45.010739 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7b57bf8468-vhp4h"
Jan 22 13:49:55 crc kubenswrapper[4769]: I0122 13:49:55.111771 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-fc69x"
Jan 22 13:49:55 crc kubenswrapper[4769]: I0122 13:49:55.173022 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-jhd8d"]
Jan 22 13:50:10 crc kubenswrapper[4769]: I0122 13:50:10.481691 4769 patch_prober.go:28] interesting pod/machine-config-daemon-hwhw7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 13:50:10 crc kubenswrapper[4769]: I0122 13:50:10.482300 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 13:50:20 crc kubenswrapper[4769]: I0122 13:50:20.222404 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" podUID="75dcccce-425a-46ab-bfeb-dc5a0ee835d4" containerName="registry" containerID="cri-o://bc3d673f0c6c961ce4f8660b81b0fde6d0b971f745bc5a43865df409316c3484" gracePeriod=30
Jan 22 13:50:20 crc kubenswrapper[4769]: I0122 13:50:20.578436 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d"
Jan 22 13:50:20 crc kubenswrapper[4769]: I0122 13:50:20.682351 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/75dcccce-425a-46ab-bfeb-dc5a0ee835d4-trusted-ca\") pod \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") "
Jan 22 13:50:20 crc kubenswrapper[4769]: I0122 13:50:20.682434 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/75dcccce-425a-46ab-bfeb-dc5a0ee835d4-ca-trust-extracted\") pod \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") "
Jan 22 13:50:20 crc kubenswrapper[4769]: I0122 13:50:20.682496 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/75dcccce-425a-46ab-bfeb-dc5a0ee835d4-registry-certificates\") pod \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") "
Jan 22 13:50:20 crc kubenswrapper[4769]: I0122 13:50:20.682562 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/75dcccce-425a-46ab-bfeb-dc5a0ee835d4-installation-pull-secrets\") pod \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") "
Jan 22 13:50:20 crc kubenswrapper[4769]: I0122 13:50:20.682627 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vg9rn\" (UniqueName: \"kubernetes.io/projected/75dcccce-425a-46ab-bfeb-dc5a0ee835d4-kube-api-access-vg9rn\") pod \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") "
Jan 22 13:50:20 crc kubenswrapper[4769]: I0122 13:50:20.682659 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/75dcccce-425a-46ab-bfeb-dc5a0ee835d4-registry-tls\") pod \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") "
Jan 22 13:50:20 crc kubenswrapper[4769]: I0122 13:50:20.683013 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/75dcccce-425a-46ab-bfeb-dc5a0ee835d4-bound-sa-token\") pod \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") "
Jan 22 13:50:20 crc kubenswrapper[4769]: I0122 13:50:20.683830 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75dcccce-425a-46ab-bfeb-dc5a0ee835d4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "75dcccce-425a-46ab-bfeb-dc5a0ee835d4" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
"trusted-ca") pod "75dcccce-425a-46ab-bfeb-dc5a0ee835d4" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:50:20 crc kubenswrapper[4769]: I0122 13:50:20.683858 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75dcccce-425a-46ab-bfeb-dc5a0ee835d4-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "75dcccce-425a-46ab-bfeb-dc5a0ee835d4" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:50:20 crc kubenswrapper[4769]: I0122 13:50:20.684092 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " Jan 22 13:50:20 crc kubenswrapper[4769]: I0122 13:50:20.684316 4769 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/75dcccce-425a-46ab-bfeb-dc5a0ee835d4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 13:50:20 crc kubenswrapper[4769]: I0122 13:50:20.684329 4769 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/75dcccce-425a-46ab-bfeb-dc5a0ee835d4-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 22 13:50:20 crc kubenswrapper[4769]: I0122 13:50:20.691469 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75dcccce-425a-46ab-bfeb-dc5a0ee835d4-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "75dcccce-425a-46ab-bfeb-dc5a0ee835d4" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:50:20 crc kubenswrapper[4769]: I0122 13:50:20.691903 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75dcccce-425a-46ab-bfeb-dc5a0ee835d4-kube-api-access-vg9rn" (OuterVolumeSpecName: "kube-api-access-vg9rn") pod "75dcccce-425a-46ab-bfeb-dc5a0ee835d4" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4"). InnerVolumeSpecName "kube-api-access-vg9rn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:50:20 crc kubenswrapper[4769]: I0122 13:50:20.692674 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75dcccce-425a-46ab-bfeb-dc5a0ee835d4-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "75dcccce-425a-46ab-bfeb-dc5a0ee835d4" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:50:20 crc kubenswrapper[4769]: I0122 13:50:20.692921 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75dcccce-425a-46ab-bfeb-dc5a0ee835d4-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "75dcccce-425a-46ab-bfeb-dc5a0ee835d4" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:50:20 crc kubenswrapper[4769]: I0122 13:50:20.695781 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "75dcccce-425a-46ab-bfeb-dc5a0ee835d4" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 22 13:50:20 crc kubenswrapper[4769]: I0122 13:50:20.702305 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75dcccce-425a-46ab-bfeb-dc5a0ee835d4-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "75dcccce-425a-46ab-bfeb-dc5a0ee835d4" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 13:50:20 crc kubenswrapper[4769]: I0122 13:50:20.785281 4769 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/75dcccce-425a-46ab-bfeb-dc5a0ee835d4-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 22 13:50:20 crc kubenswrapper[4769]: I0122 13:50:20.785325 4769 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/75dcccce-425a-46ab-bfeb-dc5a0ee835d4-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 22 13:50:20 crc kubenswrapper[4769]: I0122 13:50:20.785339 4769 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/75dcccce-425a-46ab-bfeb-dc5a0ee835d4-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 22 13:50:20 crc kubenswrapper[4769]: I0122 13:50:20.785356 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vg9rn\" (UniqueName: \"kubernetes.io/projected/75dcccce-425a-46ab-bfeb-dc5a0ee835d4-kube-api-access-vg9rn\") on node \"crc\" DevicePath \"\"" Jan 22 13:50:20 crc kubenswrapper[4769]: I0122 13:50:20.785371 4769 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/75dcccce-425a-46ab-bfeb-dc5a0ee835d4-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 22 13:50:21 crc kubenswrapper[4769]: I0122 13:50:21.019817 4769 generic.go:334] "Generic (PLEG): container finished" podID="75dcccce-425a-46ab-bfeb-dc5a0ee835d4" containerID="bc3d673f0c6c961ce4f8660b81b0fde6d0b971f745bc5a43865df409316c3484" exitCode=0 Jan 22 13:50:21 crc kubenswrapper[4769]: I0122 13:50:21.019853 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" event={"ID":"75dcccce-425a-46ab-bfeb-dc5a0ee835d4","Type":"ContainerDied","Data":"bc3d673f0c6c961ce4f8660b81b0fde6d0b971f745bc5a43865df409316c3484"} Jan 22 13:50:21 crc kubenswrapper[4769]: I0122 13:50:21.019877 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" event={"ID":"75dcccce-425a-46ab-bfeb-dc5a0ee835d4","Type":"ContainerDied","Data":"65a07796fc29ddbb6109cfb9449db8675835bbaed67ec222e3b441daddcd1e4a"} Jan 22 13:50:21 crc kubenswrapper[4769]: I0122 13:50:21.019894 4769 scope.go:117] "RemoveContainer" containerID="bc3d673f0c6c961ce4f8660b81b0fde6d0b971f745bc5a43865df409316c3484" Jan 22 13:50:21 crc kubenswrapper[4769]: I0122 13:50:21.019993 4769 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:50:21 crc kubenswrapper[4769]: I0122 13:50:21.039255 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-jhd8d"] Jan 22 13:50:21 crc kubenswrapper[4769]: I0122 13:50:21.043431 4769 scope.go:117] "RemoveContainer" containerID="bc3d673f0c6c961ce4f8660b81b0fde6d0b971f745bc5a43865df409316c3484" Jan 22 13:50:21 crc kubenswrapper[4769]: I0122 13:50:21.044726 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-jhd8d"] Jan 22 13:50:21 crc kubenswrapper[4769]: E0122 13:50:21.044751 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc3d673f0c6c961ce4f8660b81b0fde6d0b971f745bc5a43865df409316c3484\": container with ID starting with bc3d673f0c6c961ce4f8660b81b0fde6d0b971f745bc5a43865df409316c3484 not found: ID does not exist" containerID="bc3d673f0c6c961ce4f8660b81b0fde6d0b971f745bc5a43865df409316c3484" Jan 22 13:50:21 crc kubenswrapper[4769]: I0122 13:50:21.044848 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc3d673f0c6c961ce4f8660b81b0fde6d0b971f745bc5a43865df409316c3484"} err="failed to get container status \"bc3d673f0c6c961ce4f8660b81b0fde6d0b971f745bc5a43865df409316c3484\": rpc error: code = NotFound desc = could not find container \"bc3d673f0c6c961ce4f8660b81b0fde6d0b971f745bc5a43865df409316c3484\": container with ID starting with bc3d673f0c6c961ce4f8660b81b0fde6d0b971f745bc5a43865df409316c3484 not found: ID does not exist" Jan 22 13:50:22 crc kubenswrapper[4769]: I0122 13:50:22.891752 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75dcccce-425a-46ab-bfeb-dc5a0ee835d4" path="/var/lib/kubelet/pods/75dcccce-425a-46ab-bfeb-dc5a0ee835d4/volumes" Jan 22 13:50:40 crc kubenswrapper[4769]: I0122 13:50:40.482465 4769 patch_prober.go:28] interesting pod/machine-config-daemon-hwhw7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 13:50:40 crc kubenswrapper[4769]: I0122 13:50:40.482972 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 13:50:40 crc kubenswrapper[4769]: I0122 13:50:40.483058 4769 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" Jan 22 13:50:40 crc kubenswrapper[4769]: I0122 13:50:40.483555 4769 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bbd22e04ee72948953a90ab44939dc109e22abcfa3a37b3bf1a288ca6535ed41"} pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 13:50:40 crc kubenswrapper[4769]: I0122 13:50:40.483598 4769 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419" containerName="machine-config-daemon" containerID="cri-o://bbd22e04ee72948953a90ab44939dc109e22abcfa3a37b3bf1a288ca6535ed41" gracePeriod=600 Jan 22 13:50:41 crc kubenswrapper[4769]: I0122 13:50:41.131297 4769 generic.go:334] "Generic (PLEG): container finished" podID="f0af8746-c9f0-48e6-8a60-02fed286b419" containerID="bbd22e04ee72948953a90ab44939dc109e22abcfa3a37b3bf1a288ca6535ed41" exitCode=0 Jan 22 13:50:41 crc kubenswrapper[4769]: I0122 13:50:41.131392 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" event={"ID":"f0af8746-c9f0-48e6-8a60-02fed286b419","Type":"ContainerDied","Data":"bbd22e04ee72948953a90ab44939dc109e22abcfa3a37b3bf1a288ca6535ed41"} Jan 22 13:50:41 crc kubenswrapper[4769]: I0122 13:50:41.131828 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" event={"ID":"f0af8746-c9f0-48e6-8a60-02fed286b419","Type":"ContainerStarted","Data":"7014a00da4fb8832772c2abca967236faf9013893d9fcbf3a4a715925f75ad7d"} Jan 22 13:50:41 crc kubenswrapper[4769]: I0122 13:50:41.131878 4769 scope.go:117] "RemoveContainer" containerID="9528976f6e0625739097546d794445c24881673bfd0df525a77bbbd61e67897d" Jan 22 13:52:40 crc kubenswrapper[4769]: I0122 13:52:40.481731 4769 patch_prober.go:28] interesting pod/machine-config-daemon-hwhw7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 13:52:40 crc kubenswrapper[4769]: I0122 13:52:40.483506 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 13:52:41 crc kubenswrapper[4769]: I0122 13:52:41.054237 4769 scope.go:117] "RemoveContainer" containerID="2f10c10086311c3110b8a32a37138f280d5ba030f8b232e9aab33f5fe28c6210" Jan 22 13:53:10 crc kubenswrapper[4769]: I0122 13:53:10.482586 4769 patch_prober.go:28] interesting pod/machine-config-daemon-hwhw7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 13:53:10 crc kubenswrapper[4769]: I0122 13:53:10.483209 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 13:53:40 crc kubenswrapper[4769]: I0122 13:53:40.482300 4769 patch_prober.go:28] interesting pod/machine-config-daemon-hwhw7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 13:53:40 crc kubenswrapper[4769]: I0122 13:53:40.482899 4769 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 13:53:40 crc kubenswrapper[4769]: I0122 13:53:40.483079 4769 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" Jan 22 13:53:40 crc kubenswrapper[4769]: I0122 13:53:40.483740 4769 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7014a00da4fb8832772c2abca967236faf9013893d9fcbf3a4a715925f75ad7d"} pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 13:53:40 crc kubenswrapper[4769]: I0122 13:53:40.483863 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419" containerName="machine-config-daemon" containerID="cri-o://7014a00da4fb8832772c2abca967236faf9013893d9fcbf3a4a715925f75ad7d" gracePeriod=600 Jan 22 13:53:41 crc kubenswrapper[4769]: I0122 13:53:41.375079 4769 generic.go:334] "Generic (PLEG): container finished" podID="f0af8746-c9f0-48e6-8a60-02fed286b419" containerID="7014a00da4fb8832772c2abca967236faf9013893d9fcbf3a4a715925f75ad7d" exitCode=0 Jan 22 13:53:41 crc kubenswrapper[4769]: I0122 13:53:41.375827 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" event={"ID":"f0af8746-c9f0-48e6-8a60-02fed286b419","Type":"ContainerDied","Data":"7014a00da4fb8832772c2abca967236faf9013893d9fcbf3a4a715925f75ad7d"} Jan 22 13:53:41 crc kubenswrapper[4769]: I0122 13:53:41.375877 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" event={"ID":"f0af8746-c9f0-48e6-8a60-02fed286b419","Type":"ContainerStarted","Data":"3179ab0de90548977badcb720a49e9de55c423265ce63debd6542edff4ab9f17"} Jan 22 13:53:41 crc kubenswrapper[4769]: I0122 13:53:41.375906 4769 scope.go:117] "RemoveContainer" containerID="bbd22e04ee72948953a90ab44939dc109e22abcfa3a37b3bf1a288ca6535ed41" Jan 22 13:54:36 crc kubenswrapper[4769]: I0122 13:54:36.488860 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-ptnxb"] Jan 22 13:54:36 crc kubenswrapper[4769]: E0122 13:54:36.489536 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75dcccce-425a-46ab-bfeb-dc5a0ee835d4" containerName="registry" Jan 22 13:54:36 crc kubenswrapper[4769]: I0122 13:54:36.489548 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="75dcccce-425a-46ab-bfeb-dc5a0ee835d4" containerName="registry" Jan 22 13:54:36 crc kubenswrapper[4769]: I0122 13:54:36.489649 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="75dcccce-425a-46ab-bfeb-dc5a0ee835d4" containerName="registry" Jan 22 13:54:36 crc kubenswrapper[4769]: I0122 13:54:36.490053 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-ptnxb" Jan 22 13:54:36 crc kubenswrapper[4769]: I0122 13:54:36.493029 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 22 13:54:36 crc kubenswrapper[4769]: I0122 13:54:36.493228 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 22 13:54:36 crc kubenswrapper[4769]: I0122 13:54:36.499029 4769 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-shtxc" Jan 22 13:54:36 crc kubenswrapper[4769]: I0122 13:54:36.499108 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-vn9qf"] Jan 22 13:54:36 crc kubenswrapper[4769]: I0122 13:54:36.503918 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-vn9qf" Jan 22 13:54:36 crc kubenswrapper[4769]: I0122 13:54:36.507176 4769 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-4dgt9" Jan 22 13:54:36 crc kubenswrapper[4769]: I0122 13:54:36.511617 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-ptnxb"] Jan 22 13:54:36 crc kubenswrapper[4769]: I0122 13:54:36.534631 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-vn9qf"] Jan 22 13:54:36 crc kubenswrapper[4769]: I0122 13:54:36.538914 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-dzj2v"] Jan 22 13:54:36 crc kubenswrapper[4769]: I0122 13:54:36.539535 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-dzj2v" Jan 22 13:54:36 crc kubenswrapper[4769]: I0122 13:54:36.542179 4769 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-tlbpw" Jan 22 13:54:36 crc kubenswrapper[4769]: I0122 13:54:36.549975 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-dzj2v"] Jan 22 13:54:36 crc kubenswrapper[4769]: I0122 13:54:36.577840 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rqs5\" (UniqueName: \"kubernetes.io/projected/2bdf39e4-511e-4d06-a19a-7aa0cda68e94-kube-api-access-7rqs5\") pod \"cert-manager-cainjector-cf98fcc89-ptnxb\" (UID: \"2bdf39e4-511e-4d06-a19a-7aa0cda68e94\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-ptnxb" Jan 22 13:54:36 crc kubenswrapper[4769]: I0122 13:54:36.577963 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhzgr\" (UniqueName: \"kubernetes.io/projected/0390ceac-8902-475a-b739-ddc13392f828-kube-api-access-dhzgr\") pod \"cert-manager-858654f9db-vn9qf\" (UID: \"0390ceac-8902-475a-b739-ddc13392f828\") " pod="cert-manager/cert-manager-858654f9db-vn9qf" Jan 22 13:54:36 crc kubenswrapper[4769]: I0122 13:54:36.678591 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhzgr\" (UniqueName: \"kubernetes.io/projected/0390ceac-8902-475a-b739-ddc13392f828-kube-api-access-dhzgr\") pod \"cert-manager-858654f9db-vn9qf\" (UID: \"0390ceac-8902-475a-b739-ddc13392f828\") " pod="cert-manager/cert-manager-858654f9db-vn9qf" Jan 22 13:54:36 crc kubenswrapper[4769]: I0122 
13:54:36.678665 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x68w5\" (UniqueName: \"kubernetes.io/projected/e3a1ec89-c852-4274-b95b-c070b9cf8c20-kube-api-access-x68w5\") pod \"cert-manager-webhook-687f57d79b-dzj2v\" (UID: \"e3a1ec89-c852-4274-b95b-c070b9cf8c20\") " pod="cert-manager/cert-manager-webhook-687f57d79b-dzj2v" Jan 22 13:54:36 crc kubenswrapper[4769]: I0122 13:54:36.678696 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rqs5\" (UniqueName: \"kubernetes.io/projected/2bdf39e4-511e-4d06-a19a-7aa0cda68e94-kube-api-access-7rqs5\") pod \"cert-manager-cainjector-cf98fcc89-ptnxb\" (UID: \"2bdf39e4-511e-4d06-a19a-7aa0cda68e94\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-ptnxb" Jan 22 13:54:36 crc kubenswrapper[4769]: I0122 13:54:36.698577 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rqs5\" (UniqueName: \"kubernetes.io/projected/2bdf39e4-511e-4d06-a19a-7aa0cda68e94-kube-api-access-7rqs5\") pod \"cert-manager-cainjector-cf98fcc89-ptnxb\" (UID: \"2bdf39e4-511e-4d06-a19a-7aa0cda68e94\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-ptnxb" Jan 22 13:54:36 crc kubenswrapper[4769]: I0122 13:54:36.698670 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhzgr\" (UniqueName: \"kubernetes.io/projected/0390ceac-8902-475a-b739-ddc13392f828-kube-api-access-dhzgr\") pod \"cert-manager-858654f9db-vn9qf\" (UID: \"0390ceac-8902-475a-b739-ddc13392f828\") " pod="cert-manager/cert-manager-858654f9db-vn9qf" Jan 22 13:54:36 crc kubenswrapper[4769]: I0122 13:54:36.779345 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x68w5\" (UniqueName: \"kubernetes.io/projected/e3a1ec89-c852-4274-b95b-c070b9cf8c20-kube-api-access-x68w5\") pod \"cert-manager-webhook-687f57d79b-dzj2v\" (UID: \"e3a1ec89-c852-4274-b95b-c070b9cf8c20\") " pod="cert-manager/cert-manager-webhook-687f57d79b-dzj2v" Jan 22 13:54:36 crc kubenswrapper[4769]: I0122 13:54:36.798145 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x68w5\" (UniqueName: \"kubernetes.io/projected/e3a1ec89-c852-4274-b95b-c070b9cf8c20-kube-api-access-x68w5\") pod \"cert-manager-webhook-687f57d79b-dzj2v\" (UID: \"e3a1ec89-c852-4274-b95b-c070b9cf8c20\") " pod="cert-manager/cert-manager-webhook-687f57d79b-dzj2v" Jan 22 13:54:36 crc kubenswrapper[4769]: I0122 13:54:36.841433 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-ptnxb" Jan 22 13:54:36 crc kubenswrapper[4769]: I0122 13:54:36.857834 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-vn9qf" Jan 22 13:54:36 crc kubenswrapper[4769]: I0122 13:54:36.866101 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-dzj2v" Jan 22 13:54:37 crc kubenswrapper[4769]: I0122 13:54:37.058917 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-ptnxb"] Jan 22 13:54:37 crc kubenswrapper[4769]: I0122 13:54:37.076505 4769 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 13:54:37 crc kubenswrapper[4769]: I0122 13:54:37.093478 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-dzj2v"] Jan 22 13:54:37 crc kubenswrapper[4769]: W0122 13:54:37.097771 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode3a1ec89_c852_4274_b95b_c070b9cf8c20.slice/crio-e70a4d3c8d180494243ebf67ca29b136488035bcbacd715e67cae2295384e315 WatchSource:0}: Error finding container e70a4d3c8d180494243ebf67ca29b136488035bcbacd715e67cae2295384e315: Status 404 returned error can't find the container with id e70a4d3c8d180494243ebf67ca29b136488035bcbacd715e67cae2295384e315 Jan 22 13:54:37 crc kubenswrapper[4769]: I0122 13:54:37.132002 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-vn9qf"] Jan 22 13:54:37 crc kubenswrapper[4769]: W0122 13:54:37.135115 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0390ceac_8902_475a_b739_ddc13392f828.slice/crio-d38762fd08b6a6d29c8149d97a505977701f8bcc332bd95de358b235fe8c13c8 WatchSource:0}: Error finding container d38762fd08b6a6d29c8149d97a505977701f8bcc332bd95de358b235fe8c13c8: Status 404 returned error can't find the container with id d38762fd08b6a6d29c8149d97a505977701f8bcc332bd95de358b235fe8c13c8 Jan 22 13:54:37 crc kubenswrapper[4769]: I0122 13:54:37.687195 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-vn9qf" event={"ID":"0390ceac-8902-475a-b739-ddc13392f828","Type":"ContainerStarted","Data":"d38762fd08b6a6d29c8149d97a505977701f8bcc332bd95de358b235fe8c13c8"} Jan 22 13:54:37 crc kubenswrapper[4769]: I0122 13:54:37.688750 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-dzj2v" event={"ID":"e3a1ec89-c852-4274-b95b-c070b9cf8c20","Type":"ContainerStarted","Data":"e70a4d3c8d180494243ebf67ca29b136488035bcbacd715e67cae2295384e315"} Jan 22 13:54:37 crc kubenswrapper[4769]: I0122 13:54:37.689761 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-ptnxb" event={"ID":"2bdf39e4-511e-4d06-a19a-7aa0cda68e94","Type":"ContainerStarted","Data":"80474f87f16d034b976b0a7d6850685afd199dca4250009888c5348f6b819510"} Jan 22 13:54:40 crc kubenswrapper[4769]: I0122 13:54:40.710084 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-vn9qf" event={"ID":"0390ceac-8902-475a-b739-ddc13392f828","Type":"ContainerStarted","Data":"bf4502bda093bf1c79d6ac2be6d5c6ef1715f46fb8ee6d50bfb3a3dff015df65"} Jan 22 13:54:40 crc kubenswrapper[4769]: I0122 13:54:40.732328 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-vn9qf" podStartSLOduration=1.991057149 podStartE2EDuration="4.732304426s" podCreationTimestamp="2026-01-22 13:54:36 +0000 UTC" firstStartedPulling="2026-01-22 13:54:37.138398792 +0000 UTC m=+656.549508721" 
lastFinishedPulling="2026-01-22 13:54:39.879646069 +0000 UTC m=+659.290755998" observedRunningTime="2026-01-22 13:54:40.723060013 +0000 UTC m=+660.134169962" watchObservedRunningTime="2026-01-22 13:54:40.732304426 +0000 UTC m=+660.143414355" Jan 22 13:54:41 crc kubenswrapper[4769]: I0122 13:54:41.719289 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-dzj2v" event={"ID":"e3a1ec89-c852-4274-b95b-c070b9cf8c20","Type":"ContainerStarted","Data":"cd57fd84c5caacb814ca56519a37f9ee73e612e7657236a80acee23f6147eb1d"} Jan 22 13:54:41 crc kubenswrapper[4769]: I0122 13:54:41.719874 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-dzj2v" Jan 22 13:54:41 crc kubenswrapper[4769]: I0122 13:54:41.722734 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-ptnxb" event={"ID":"2bdf39e4-511e-4d06-a19a-7aa0cda68e94","Type":"ContainerStarted","Data":"8317071a82211f0e5aacdba958f3bbab1b6b1b216e23a1b333561f916cd25a85"} Jan 22 13:54:41 crc kubenswrapper[4769]: I0122 13:54:41.745193 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-dzj2v" podStartSLOduration=1.685460223 podStartE2EDuration="5.745169446s" podCreationTimestamp="2026-01-22 13:54:36 +0000 UTC" firstStartedPulling="2026-01-22 13:54:37.101178989 +0000 UTC m=+656.512288918" lastFinishedPulling="2026-01-22 13:54:41.160888212 +0000 UTC m=+660.571998141" observedRunningTime="2026-01-22 13:54:41.739190131 +0000 UTC m=+661.150300100" watchObservedRunningTime="2026-01-22 13:54:41.745169446 +0000 UTC m=+661.156279415" Jan 22 13:54:41 crc kubenswrapper[4769]: I0122 13:54:41.763230 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-ptnxb" podStartSLOduration=1.685765062 podStartE2EDuration="5.763196281s" podCreationTimestamp="2026-01-22 13:54:36 +0000 UTC" firstStartedPulling="2026-01-22 13:54:37.076226144 +0000 UTC m=+656.487336073" lastFinishedPulling="2026-01-22 13:54:41.153657363 +0000 UTC m=+660.564767292" observedRunningTime="2026-01-22 13:54:41.754664916 +0000 UTC m=+661.165774845" watchObservedRunningTime="2026-01-22 13:54:41.763196281 +0000 UTC m=+661.174306250" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.363538 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jrg8z"] Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.364428 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="kube-rbac-proxy-node" containerID="cri-o://f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44" gracePeriod=30 Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.364443 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624" gracePeriod=30 Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.364484 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="nbdb" 
containerID="cri-o://a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571" gracePeriod=30 Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.364553 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="ovn-acl-logging" containerID="cri-o://599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9" gracePeriod=30 Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.364611 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="sbdb" containerID="cri-o://3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821" gracePeriod=30 Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.364528 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="northd" containerID="cri-o://662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94" gracePeriod=30 Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.364384 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="ovn-controller" containerID="cri-o://73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609" gracePeriod=30 Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.423445 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="ovnkube-controller" containerID="cri-o://d764d72b65db80595dfba72a7b23c9291cb7dd526ced59c756d4b53cc30aaa5b" gracePeriod=30 Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.656713 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jrg8z_9c028db8-99b9-422d-ba46-e1a2db06ce3c/ovnkube-controller/3.log" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.659175 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jrg8z_9c028db8-99b9-422d-ba46-e1a2db06ce3c/ovn-acl-logging/0.log" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.659907 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jrg8z_9c028db8-99b9-422d-ba46-e1a2db06ce3c/ovn-controller/0.log" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.660340 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.716521 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-fg2hx"] Jan 22 13:54:46 crc kubenswrapper[4769]: E0122 13:54:46.716770 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="sbdb" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.716817 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="sbdb" Jan 22 13:54:46 crc kubenswrapper[4769]: E0122 13:54:46.716852 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="ovnkube-controller" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.716862 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="ovnkube-controller" Jan 22 13:54:46 crc kubenswrapper[4769]: E0122 13:54:46.716871 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="ovnkube-controller" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.716880 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="ovnkube-controller" Jan 22 13:54:46 crc kubenswrapper[4769]: E0122 13:54:46.716893 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="kubecfg-setup" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.716902 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="kubecfg-setup" Jan 22 13:54:46 crc kubenswrapper[4769]: E0122 13:54:46.716914 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="ovn-controller" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.716922 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="ovn-controller" Jan 22 13:54:46 crc kubenswrapper[4769]: E0122 13:54:46.716934 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="kube-rbac-proxy-node" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.716942 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="kube-rbac-proxy-node" Jan 22 13:54:46 crc kubenswrapper[4769]: E0122 13:54:46.716952 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="ovnkube-controller" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.716959 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="ovnkube-controller" Jan 22 13:54:46 crc kubenswrapper[4769]: E0122 13:54:46.716969 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="northd" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.716977 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="northd" Jan 22 13:54:46 crc kubenswrapper[4769]: E0122 13:54:46.716987 4769 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="ovnkube-controller" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.716994 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="ovnkube-controller" Jan 22 13:54:46 crc kubenswrapper[4769]: E0122 13:54:46.717005 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="nbdb" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.717013 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="nbdb" Jan 22 13:54:46 crc kubenswrapper[4769]: E0122 13:54:46.717023 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="ovn-acl-logging" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.717031 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="ovn-acl-logging" Jan 22 13:54:46 crc kubenswrapper[4769]: E0122 13:54:46.717044 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="kube-rbac-proxy-ovn-metrics" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.717051 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="kube-rbac-proxy-ovn-metrics" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.717154 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="ovn-acl-logging" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.717168 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="ovnkube-controller" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.717181 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="kube-rbac-proxy-node" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.717190 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="ovnkube-controller" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.717198 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="kube-rbac-proxy-ovn-metrics" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.717206 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="ovnkube-controller" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.717214 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="northd" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.717225 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="nbdb" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.717235 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="ovn-controller" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.717243 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="sbdb" Jan 22 13:54:46 crc kubenswrapper[4769]: E0122 
13:54:46.717372 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="ovnkube-controller" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.717381 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="ovnkube-controller" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.717530 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="ovnkube-controller" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.717737 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="ovnkube-controller" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.719891 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.754636 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fclh4_d4186e93-df8a-49d3-9068-c8b8acd05baa/kube-multus/2.log" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.755165 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fclh4_d4186e93-df8a-49d3-9068-c8b8acd05baa/kube-multus/1.log" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.755215 4769 generic.go:334] "Generic (PLEG): container finished" podID="d4186e93-df8a-49d3-9068-c8b8acd05baa" containerID="8b525990498eb9a71e43d42c3191a2ad5043bcf24c857f8db1dc71b1a487d0c3" exitCode=2 Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.755298 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fclh4" event={"ID":"d4186e93-df8a-49d3-9068-c8b8acd05baa","Type":"ContainerDied","Data":"8b525990498eb9a71e43d42c3191a2ad5043bcf24c857f8db1dc71b1a487d0c3"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.755453 4769 scope.go:117] "RemoveContainer" containerID="ffa3ce92a87f448f60b39283929d77139230e6bb0052cdeb6303e0f6b13997d8" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.756094 4769 scope.go:117] "RemoveContainer" containerID="8b525990498eb9a71e43d42c3191a2ad5043bcf24c857f8db1dc71b1a487d0c3" Jan 22 13:54:46 crc kubenswrapper[4769]: E0122 13:54:46.756417 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-fclh4_openshift-multus(d4186e93-df8a-49d3-9068-c8b8acd05baa)\"" pod="openshift-multus/multus-fclh4" podUID="d4186e93-df8a-49d3-9068-c8b8acd05baa" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.758061 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jrg8z_9c028db8-99b9-422d-ba46-e1a2db06ce3c/ovnkube-controller/3.log" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.763077 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jrg8z_9c028db8-99b9-422d-ba46-e1a2db06ce3c/ovn-acl-logging/0.log" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.763507 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jrg8z_9c028db8-99b9-422d-ba46-e1a2db06ce3c/ovn-controller/0.log" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.763855 4769 generic.go:334] "Generic (PLEG): container finished" 
podID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerID="d764d72b65db80595dfba72a7b23c9291cb7dd526ced59c756d4b53cc30aaa5b" exitCode=0 Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.763871 4769 generic.go:334] "Generic (PLEG): container finished" podID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerID="3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821" exitCode=0 Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.763879 4769 generic.go:334] "Generic (PLEG): container finished" podID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerID="a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571" exitCode=0 Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.763886 4769 generic.go:334] "Generic (PLEG): container finished" podID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerID="662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94" exitCode=0 Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.763893 4769 generic.go:334] "Generic (PLEG): container finished" podID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerID="926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624" exitCode=0 Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.763901 4769 generic.go:334] "Generic (PLEG): container finished" podID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerID="f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44" exitCode=0 Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.763908 4769 generic.go:334] "Generic (PLEG): container finished" podID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerID="599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9" exitCode=143 Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.763916 4769 generic.go:334] "Generic (PLEG): container finished" podID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerID="73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609" exitCode=143 Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.763934 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" event={"ID":"9c028db8-99b9-422d-ba46-e1a2db06ce3c","Type":"ContainerDied","Data":"d764d72b65db80595dfba72a7b23c9291cb7dd526ced59c756d4b53cc30aaa5b"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.763956 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" event={"ID":"9c028db8-99b9-422d-ba46-e1a2db06ce3c","Type":"ContainerDied","Data":"3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.763968 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" event={"ID":"9c028db8-99b9-422d-ba46-e1a2db06ce3c","Type":"ContainerDied","Data":"a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.763968 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.763978 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" event={"ID":"9c028db8-99b9-422d-ba46-e1a2db06ce3c","Type":"ContainerDied","Data":"662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764081 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" event={"ID":"9c028db8-99b9-422d-ba46-e1a2db06ce3c","Type":"ContainerDied","Data":"926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764094 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" event={"ID":"9c028db8-99b9-422d-ba46-e1a2db06ce3c","Type":"ContainerDied","Data":"f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764108 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d764d72b65db80595dfba72a7b23c9291cb7dd526ced59c756d4b53cc30aaa5b"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764118 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5f2aa40f7fb64759fa8a5f718239811c0af3f0c9eee1849d5e53acc1d4c486b6"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764125 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764130 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764135 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764141 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764145 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764150 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764155 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764159 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7"} Jan 22 13:54:46 crc 
kubenswrapper[4769]: I0122 13:54:46.764167 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" event={"ID":"9c028db8-99b9-422d-ba46-e1a2db06ce3c","Type":"ContainerDied","Data":"599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764175 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d764d72b65db80595dfba72a7b23c9291cb7dd526ced59c756d4b53cc30aaa5b"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764181 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5f2aa40f7fb64759fa8a5f718239811c0af3f0c9eee1849d5e53acc1d4c486b6"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764186 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764191 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764197 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764203 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764208 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764213 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764217 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764222 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764229 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" event={"ID":"9c028db8-99b9-422d-ba46-e1a2db06ce3c","Type":"ContainerDied","Data":"73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764236 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d764d72b65db80595dfba72a7b23c9291cb7dd526ced59c756d4b53cc30aaa5b"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764243 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"5f2aa40f7fb64759fa8a5f718239811c0af3f0c9eee1849d5e53acc1d4c486b6"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764249 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764255 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764261 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764268 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764274 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764281 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764287 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764293 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764302 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" event={"ID":"9c028db8-99b9-422d-ba46-e1a2db06ce3c","Type":"ContainerDied","Data":"e2d3c55e05f15106417cacacd13bd2ff48a7d39f5b85eb5a6e946e2cf2413457"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764311 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d764d72b65db80595dfba72a7b23c9291cb7dd526ced59c756d4b53cc30aaa5b"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764322 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5f2aa40f7fb64759fa8a5f718239811c0af3f0c9eee1849d5e53acc1d4c486b6"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764332 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764342 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764348 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764355 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764361 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764367 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764373 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764379 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.788032 4769 scope.go:117] "RemoveContainer" containerID="d764d72b65db80595dfba72a7b23c9291cb7dd526ced59c756d4b53cc30aaa5b" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.803379 4769 scope.go:117] "RemoveContainer" containerID="5f2aa40f7fb64759fa8a5f718239811c0af3f0c9eee1849d5e53acc1d4c486b6" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.819819 4769 scope.go:117] "RemoveContainer" containerID="3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.820500 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-host-run-ovn-kubernetes\") pod \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.820573 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-host-run-netns\") pod \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.820629 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-var-lib-openvswitch\") pod \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.820623 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "9c028db8-99b9-422d-ba46-e1a2db06ce3c" (UID: "9c028db8-99b9-422d-ba46-e1a2db06ce3c"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.820679 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "9c028db8-99b9-422d-ba46-e1a2db06ce3c" (UID: "9c028db8-99b9-422d-ba46-e1a2db06ce3c"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.820733 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-etc-openvswitch\") pod \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.820779 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "9c028db8-99b9-422d-ba46-e1a2db06ce3c" (UID: "9c028db8-99b9-422d-ba46-e1a2db06ce3c"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.820827 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "9c028db8-99b9-422d-ba46-e1a2db06ce3c" (UID: "9c028db8-99b9-422d-ba46-e1a2db06ce3c"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.820849 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-host-cni-bin\") pod \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.820997 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9c028db8-99b9-422d-ba46-e1a2db06ce3c-ovnkube-script-lib\") pod \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.820997 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "9c028db8-99b9-422d-ba46-e1a2db06ce3c" (UID: "9c028db8-99b9-422d-ba46-e1a2db06ce3c"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.821068 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9c028db8-99b9-422d-ba46-e1a2db06ce3c-ovn-node-metrics-cert\") pod \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.821110 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.821151 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-systemd-units\") pod \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.821187 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9c028db8-99b9-422d-ba46-e1a2db06ce3c-env-overrides\") pod \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.821407 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-run-openvswitch\") pod \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.821580 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-host-cni-netd\") pod \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.821227 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "9c028db8-99b9-422d-ba46-e1a2db06ce3c" (UID: "9c028db8-99b9-422d-ba46-e1a2db06ce3c"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.821651 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c028db8-99b9-422d-ba46-e1a2db06ce3c-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "9c028db8-99b9-422d-ba46-e1a2db06ce3c" (UID: "9c028db8-99b9-422d-ba46-e1a2db06ce3c"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.821293 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "9c028db8-99b9-422d-ba46-e1a2db06ce3c" (UID: "9c028db8-99b9-422d-ba46-e1a2db06ce3c"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.821491 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "9c028db8-99b9-422d-ba46-e1a2db06ce3c" (UID: "9c028db8-99b9-422d-ba46-e1a2db06ce3c"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.821678 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c028db8-99b9-422d-ba46-e1a2db06ce3c-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "9c028db8-99b9-422d-ba46-e1a2db06ce3c" (UID: "9c028db8-99b9-422d-ba46-e1a2db06ce3c"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.821739 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-log-socket\") pod \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.821740 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "9c028db8-99b9-422d-ba46-e1a2db06ce3c" (UID: "9c028db8-99b9-422d-ba46-e1a2db06ce3c"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.821780 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-log-socket" (OuterVolumeSpecName: "log-socket") pod "9c028db8-99b9-422d-ba46-e1a2db06ce3c" (UID: "9c028db8-99b9-422d-ba46-e1a2db06ce3c"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.821933 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9c028db8-99b9-422d-ba46-e1a2db06ce3c-ovnkube-config\") pod \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.822018 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-host-kubelet\") pod \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.822061 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "9c028db8-99b9-422d-ba46-e1a2db06ce3c" (UID: "9c028db8-99b9-422d-ba46-e1a2db06ce3c"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.822090 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-run-systemd\") pod \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.822133 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p276w\" (UniqueName: \"kubernetes.io/projected/9c028db8-99b9-422d-ba46-e1a2db06ce3c-kube-api-access-p276w\") pod \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.822179 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-node-log\") pod \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.822213 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-run-ovn\") pod \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.822251 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-host-slash\") pod \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.822327 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-node-log" (OuterVolumeSpecName: "node-log") pod "9c028db8-99b9-422d-ba46-e1a2db06ce3c" (UID: "9c028db8-99b9-422d-ba46-e1a2db06ce3c"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.822342 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "9c028db8-99b9-422d-ba46-e1a2db06ce3c" (UID: "9c028db8-99b9-422d-ba46-e1a2db06ce3c"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.822387 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c028db8-99b9-422d-ba46-e1a2db06ce3c-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "9c028db8-99b9-422d-ba46-e1a2db06ce3c" (UID: "9c028db8-99b9-422d-ba46-e1a2db06ce3c"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.822453 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-host-slash" (OuterVolumeSpecName: "host-slash") pod "9c028db8-99b9-422d-ba46-e1a2db06ce3c" (UID: "9c028db8-99b9-422d-ba46-e1a2db06ce3c"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.823035 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-host-run-netns\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.823115 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-host-cni-netd\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.823268 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5426c965-79a4-46ea-b709-949e0a5e3065-env-overrides\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.823367 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-run-ovn\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.823461 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.823509 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-var-lib-openvswitch\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.823541 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-host-slash\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.823579 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-run-systemd\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.823608 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-node-log\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.823693 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-run-openvswitch\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.823807 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-host-run-ovn-kubernetes\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.823843 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-log-socket\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.823896 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-systemd-units\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.824003 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5426c965-79a4-46ea-b709-949e0a5e3065-ovn-node-metrics-cert\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.824053 4769 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-host-cni-bin\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.824098 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5426c965-79a4-46ea-b709-949e0a5e3065-ovnkube-config\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.824130 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-host-kubelet\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.824201 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-etc-openvswitch\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.824297 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5426c965-79a4-46ea-b709-949e0a5e3065-ovnkube-script-lib\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.824352 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w28zg\" (UniqueName: \"kubernetes.io/projected/5426c965-79a4-46ea-b709-949e0a5e3065-kube-api-access-w28zg\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.824467 4769 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-node-log\") on node \"crc\" DevicePath \"\"" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.824487 4769 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.824500 4769 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-host-slash\") on node \"crc\" DevicePath \"\"" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.824514 4769 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.824529 4769 reconciler_common.go:293] "Volume detached for volume 
\"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.824542 4769 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.824601 4769 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.824614 4769 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.824627 4769 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9c028db8-99b9-422d-ba46-e1a2db06ce3c-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.824640 4769 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.824653 4769 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.824666 4769 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9c028db8-99b9-422d-ba46-e1a2db06ce3c-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.824678 4769 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.824691 4769 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.824703 4769 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-log-socket\") on node \"crc\" DevicePath \"\"" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.824715 4769 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9c028db8-99b9-422d-ba46-e1a2db06ce3c-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.824727 4769 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.828225 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/9c028db8-99b9-422d-ba46-e1a2db06ce3c-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "9c028db8-99b9-422d-ba46-e1a2db06ce3c" (UID: "9c028db8-99b9-422d-ba46-e1a2db06ce3c"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.828352 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c028db8-99b9-422d-ba46-e1a2db06ce3c-kube-api-access-p276w" (OuterVolumeSpecName: "kube-api-access-p276w") pod "9c028db8-99b9-422d-ba46-e1a2db06ce3c" (UID: "9c028db8-99b9-422d-ba46-e1a2db06ce3c"). InnerVolumeSpecName "kube-api-access-p276w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.835017 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "9c028db8-99b9-422d-ba46-e1a2db06ce3c" (UID: "9c028db8-99b9-422d-ba46-e1a2db06ce3c"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.835916 4769 scope.go:117] "RemoveContainer" containerID="a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.852845 4769 scope.go:117] "RemoveContainer" containerID="662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.869340 4769 scope.go:117] "RemoveContainer" containerID="926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.869347 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-dzj2v" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.891491 4769 scope.go:117] "RemoveContainer" containerID="f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.912219 4769 scope.go:117] "RemoveContainer" containerID="599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.925995 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-run-ovn\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.926056 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.926084 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-var-lib-openvswitch\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 
13:54:46.926109 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-host-slash\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.926133 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-run-systemd\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.926182 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-node-log\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.926210 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-run-openvswitch\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.926242 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-host-run-ovn-kubernetes\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.926287 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-log-socket\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.926316 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-systemd-units\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.926358 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5426c965-79a4-46ea-b709-949e0a5e3065-ovn-node-metrics-cert\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.926382 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-host-cni-bin\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.926415 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-etc-openvswitch\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.926434 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5426c965-79a4-46ea-b709-949e0a5e3065-ovnkube-config\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.926455 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-host-kubelet\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.926485 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5426c965-79a4-46ea-b709-949e0a5e3065-ovnkube-script-lib\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.926505 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w28zg\" (UniqueName: \"kubernetes.io/projected/5426c965-79a4-46ea-b709-949e0a5e3065-kube-api-access-w28zg\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.926535 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-host-cni-netd\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.926553 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-host-run-netns\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.926577 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5426c965-79a4-46ea-b709-949e0a5e3065-env-overrides\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.926609 4769 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.926619 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p276w\" (UniqueName: \"kubernetes.io/projected/9c028db8-99b9-422d-ba46-e1a2db06ce3c-kube-api-access-p276w\") on node \"crc\" DevicePath \"\"" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.926632 
4769 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9c028db8-99b9-422d-ba46-e1a2db06ce3c-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.927147 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-systemd-units\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.927212 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-host-cni-bin\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.927250 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-run-ovn\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.927313 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.927359 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-var-lib-openvswitch\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.927399 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-host-slash\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.927435 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-run-systemd\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.927465 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-node-log\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.927492 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-run-openvswitch\") pod \"ovnkube-node-fg2hx\" (UID: 
\"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.927519 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-host-run-ovn-kubernetes\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.927529 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-etc-openvswitch\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.927621 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5426c965-79a4-46ea-b709-949e0a5e3065-env-overrides\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.927624 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-host-run-netns\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.927683 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-host-kubelet\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.927707 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-log-socket\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.927983 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-host-cni-netd\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.928439 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5426c965-79a4-46ea-b709-949e0a5e3065-ovnkube-script-lib\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.928483 4769 scope.go:117] "RemoveContainer" containerID="73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.928887 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5426c965-79a4-46ea-b709-949e0a5e3065-ovnkube-config\") pod 
\"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.932904 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5426c965-79a4-46ea-b709-949e0a5e3065-ovn-node-metrics-cert\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.942971 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w28zg\" (UniqueName: \"kubernetes.io/projected/5426c965-79a4-46ea-b709-949e0a5e3065-kube-api-access-w28zg\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.948379 4769 scope.go:117] "RemoveContainer" containerID="bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.962933 4769 scope.go:117] "RemoveContainer" containerID="d764d72b65db80595dfba72a7b23c9291cb7dd526ced59c756d4b53cc30aaa5b" Jan 22 13:54:46 crc kubenswrapper[4769]: E0122 13:54:46.963343 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d764d72b65db80595dfba72a7b23c9291cb7dd526ced59c756d4b53cc30aaa5b\": container with ID starting with d764d72b65db80595dfba72a7b23c9291cb7dd526ced59c756d4b53cc30aaa5b not found: ID does not exist" containerID="d764d72b65db80595dfba72a7b23c9291cb7dd526ced59c756d4b53cc30aaa5b" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.963401 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d764d72b65db80595dfba72a7b23c9291cb7dd526ced59c756d4b53cc30aaa5b"} err="failed to get container status \"d764d72b65db80595dfba72a7b23c9291cb7dd526ced59c756d4b53cc30aaa5b\": rpc error: code = NotFound desc = could not find container \"d764d72b65db80595dfba72a7b23c9291cb7dd526ced59c756d4b53cc30aaa5b\": container with ID starting with d764d72b65db80595dfba72a7b23c9291cb7dd526ced59c756d4b53cc30aaa5b not found: ID does not exist" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.963431 4769 scope.go:117] "RemoveContainer" containerID="5f2aa40f7fb64759fa8a5f718239811c0af3f0c9eee1849d5e53acc1d4c486b6" Jan 22 13:54:46 crc kubenswrapper[4769]: E0122 13:54:46.963865 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f2aa40f7fb64759fa8a5f718239811c0af3f0c9eee1849d5e53acc1d4c486b6\": container with ID starting with 5f2aa40f7fb64759fa8a5f718239811c0af3f0c9eee1849d5e53acc1d4c486b6 not found: ID does not exist" containerID="5f2aa40f7fb64759fa8a5f718239811c0af3f0c9eee1849d5e53acc1d4c486b6" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.963899 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f2aa40f7fb64759fa8a5f718239811c0af3f0c9eee1849d5e53acc1d4c486b6"} err="failed to get container status \"5f2aa40f7fb64759fa8a5f718239811c0af3f0c9eee1849d5e53acc1d4c486b6\": rpc error: code = NotFound desc = could not find container \"5f2aa40f7fb64759fa8a5f718239811c0af3f0c9eee1849d5e53acc1d4c486b6\": container with ID starting with 5f2aa40f7fb64759fa8a5f718239811c0af3f0c9eee1849d5e53acc1d4c486b6 not found: ID does not exist" 
Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.963921 4769 scope.go:117] "RemoveContainer" containerID="3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821"
Jan 22 13:54:46 crc kubenswrapper[4769]: E0122 13:54:46.964369 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821\": container with ID starting with 3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821 not found: ID does not exist" containerID="3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821"
Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.964461 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821"} err="failed to get container status \"3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821\": rpc error: code = NotFound desc = could not find container \"3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821\": container with ID starting with 3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821 not found: ID does not exist"
Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.964479 4769 scope.go:117] "RemoveContainer" containerID="a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571"
Jan 22 13:54:46 crc kubenswrapper[4769]: E0122 13:54:46.964726 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571\": container with ID starting with a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571 not found: ID does not exist" containerID="a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571"
Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.964754 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571"} err="failed to get container status \"a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571\": rpc error: code = NotFound desc = could not find container \"a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571\": container with ID starting with a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571 not found: ID does not exist"
Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.964770 4769 scope.go:117] "RemoveContainer" containerID="662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94"
Jan 22 13:54:46 crc kubenswrapper[4769]: E0122 13:54:46.965075 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94\": container with ID starting with 662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94 not found: ID does not exist" containerID="662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94"
Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.965097 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94"} err="failed to get container status \"662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94\": rpc error: code = NotFound desc = could not find container \"662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94\": container with ID starting with 662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94 not found: ID does not exist"
Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.965109 4769 scope.go:117] "RemoveContainer" containerID="926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624"
Jan 22 13:54:46 crc kubenswrapper[4769]: E0122 13:54:46.965364 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624\": container with ID starting with 926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624 not found: ID does not exist" containerID="926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624"
Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.965424 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624"} err="failed to get container status \"926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624\": rpc error: code = NotFound desc = could not find container \"926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624\": container with ID starting with 926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624 not found: ID does not exist"
Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.965461 4769 scope.go:117] "RemoveContainer" containerID="f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44"
Jan 22 13:54:46 crc kubenswrapper[4769]: E0122 13:54:46.965831 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44\": container with ID starting with f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44 not found: ID does not exist" containerID="f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44"
Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.965863 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44"} err="failed to get container status \"f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44\": rpc error: code = NotFound desc = could not find container \"f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44\": container with ID starting with f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44 not found: ID does not exist"
Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.965884 4769 scope.go:117] "RemoveContainer" containerID="599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9"
Jan 22 13:54:46 crc kubenswrapper[4769]: E0122 13:54:46.966137 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9\": container with ID starting with 599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9 not found: ID does not exist" containerID="599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9"
Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.966164 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9"} err="failed to get container status \"599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9\": rpc error: code = NotFound desc = could not find container \"599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9\": container with ID starting with 599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9 not found: ID does not exist"
Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.966195 4769 scope.go:117] "RemoveContainer" containerID="73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609"
Jan 22 13:54:46 crc kubenswrapper[4769]: E0122 13:54:46.966461 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609\": container with ID starting with 73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609 not found: ID does not exist" containerID="73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609"
Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.966491 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609"} err="failed to get container status \"73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609\": rpc error: code = NotFound desc = could not find container \"73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609\": container with ID starting with 73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609 not found: ID does not exist"
Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.966508 4769 scope.go:117] "RemoveContainer" containerID="bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7"
Jan 22 13:54:46 crc kubenswrapper[4769]: E0122 13:54:46.966806 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\": container with ID starting with bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7 not found: ID does not exist" containerID="bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7"
Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.966836 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7"} err="failed to get container status \"bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\": rpc error: code = NotFound desc = could not find container \"bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\": container with ID starting with bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7 not found: ID does not exist"
Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.966855 4769 scope.go:117] "RemoveContainer" containerID="d764d72b65db80595dfba72a7b23c9291cb7dd526ced59c756d4b53cc30aaa5b"
Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.967058 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d764d72b65db80595dfba72a7b23c9291cb7dd526ced59c756d4b53cc30aaa5b"} err="failed to get container status \"d764d72b65db80595dfba72a7b23c9291cb7dd526ced59c756d4b53cc30aaa5b\": rpc error: code = NotFound desc = could not find container \"d764d72b65db80595dfba72a7b23c9291cb7dd526ced59c756d4b53cc30aaa5b\": container with ID starting with d764d72b65db80595dfba72a7b23c9291cb7dd526ced59c756d4b53cc30aaa5b not found: ID does not exist"
Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.967090 4769 scope.go:117] "RemoveContainer" containerID="5f2aa40f7fb64759fa8a5f718239811c0af3f0c9eee1849d5e53acc1d4c486b6"
Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.967300 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f2aa40f7fb64759fa8a5f718239811c0af3f0c9eee1849d5e53acc1d4c486b6"} err="failed to get container status \"5f2aa40f7fb64759fa8a5f718239811c0af3f0c9eee1849d5e53acc1d4c486b6\": rpc error: code = NotFound desc = could not find container \"5f2aa40f7fb64759fa8a5f718239811c0af3f0c9eee1849d5e53acc1d4c486b6\": container with ID starting with 5f2aa40f7fb64759fa8a5f718239811c0af3f0c9eee1849d5e53acc1d4c486b6 not found: ID does not exist"
Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.967333 4769 scope.go:117] "RemoveContainer" containerID="3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821"
Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.967585 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821"} err="failed to get container status \"3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821\": rpc error: code = NotFound desc = could not find container \"3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821\": container with ID starting with 3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821 not found: ID does not exist"
Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.967616 4769 scope.go:117] "RemoveContainer" containerID="a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571"
Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.967868 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571"} err="failed to get container status \"a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571\": rpc error: code = NotFound desc = could not find container \"a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571\": container with ID starting with a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571 not found: ID does not exist"
Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.967894 4769 scope.go:117] "RemoveContainer" containerID="662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94"
Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.968098 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94"} err="failed to get container status \"662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94\": rpc error: code = NotFound desc = could not find container \"662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94\": container with ID starting with 662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94 not found: ID does not exist"
Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.968121 4769 scope.go:117] "RemoveContainer" containerID="926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624"
Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.968349 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624"} err="failed to get container status \"926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624\": rpc error: code = NotFound desc = could not find container \"926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624\": container with ID starting with 926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624 not found: ID does not exist"
Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.968373 4769 scope.go:117] "RemoveContainer" containerID="f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44"
Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.968574 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44"} err="failed to get container status \"f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44\": rpc error: code = NotFound desc = could not find container \"f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44\": container with ID starting with f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44 not found: ID does not exist"
Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.968597 4769 scope.go:117] "RemoveContainer" containerID="599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9"
Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.968776 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9"} err="failed to get container status \"599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9\": rpc error: code = NotFound desc = could not find container \"599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9\": container with ID starting with 599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9 not found: ID does not exist"
Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.968828 4769 scope.go:117] "RemoveContainer" containerID="73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609"
Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.969041 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609"} err="failed to get container status \"73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609\": rpc error: code = NotFound desc = could not find container \"73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609\": container with ID starting with 73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609 not found: ID does not exist"
Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.969063 4769 scope.go:117] "RemoveContainer" containerID="bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7"
Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.969249 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7"} err="failed to get container status \"bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\": rpc error: code = NotFound desc = could not find container \"bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\": container with ID starting with bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7 not found: ID does not exist" Jan
Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.969269 4769 scope.go:117] "RemoveContainer" containerID="d764d72b65db80595dfba72a7b23c9291cb7dd526ced59c756d4b53cc30aaa5b"
Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.969466 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d764d72b65db80595dfba72a7b23c9291cb7dd526ced59c756d4b53cc30aaa5b"} err="failed to get container status \"d764d72b65db80595dfba72a7b23c9291cb7dd526ced59c756d4b53cc30aaa5b\": rpc error: code = NotFound desc = could not find container \"d764d72b65db80595dfba72a7b23c9291cb7dd526ced59c756d4b53cc30aaa5b\": container with ID starting with d764d72b65db80595dfba72a7b23c9291cb7dd526ced59c756d4b53cc30aaa5b not found: ID does not exist"
Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.969486 4769 scope.go:117] "RemoveContainer" containerID="5f2aa40f7fb64759fa8a5f718239811c0af3f0c9eee1849d5e53acc1d4c486b6"
Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.969674 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f2aa40f7fb64759fa8a5f718239811c0af3f0c9eee1849d5e53acc1d4c486b6"} err="failed to get container status \"5f2aa40f7fb64759fa8a5f718239811c0af3f0c9eee1849d5e53acc1d4c486b6\": rpc error: code = NotFound desc = could not find container \"5f2aa40f7fb64759fa8a5f718239811c0af3f0c9eee1849d5e53acc1d4c486b6\": container with ID starting with 5f2aa40f7fb64759fa8a5f718239811c0af3f0c9eee1849d5e53acc1d4c486b6 not found: ID does not exist"
Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.969695 4769 scope.go:117] "RemoveContainer" containerID="3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821"
Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.970160 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821"} err="failed to get container status \"3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821\": rpc error: code = NotFound desc = could not find container \"3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821\": container with ID starting with 3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821 not found: ID does not exist"
Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.970187 4769 scope.go:117] "RemoveContainer" containerID="a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571"
Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.970408 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571"} err="failed to get container status \"a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571\": rpc error: code = NotFound desc = could not find container \"a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571\": container with ID starting with a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571 not found: ID does not exist"
Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.970428 4769 scope.go:117] "RemoveContainer" containerID="662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94"
Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.970634 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94"} err="failed to get container status \"662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94\": rpc error: code = NotFound desc = could not find container \"662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94\": container with ID starting with 662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94 not found: ID does not exist"
Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.970656 4769 scope.go:117] "RemoveContainer" containerID="926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624"
Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.970976 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624"} err="failed to get container status \"926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624\": rpc error: code = NotFound desc = could not find container \"926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624\": container with ID starting with 926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624 not found: ID does not exist"
Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.971007 4769 scope.go:117] "RemoveContainer" containerID="f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44"
Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.971228 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44"} err="failed to get container status \"f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44\": rpc error: code = NotFound desc = could not find container \"f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44\": container with ID starting with f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44 not found: ID does not exist"
Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.971254 4769 scope.go:117] "RemoveContainer" containerID="599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9"
Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.971442 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9"} err="failed to get container status \"599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9\": rpc error: code = NotFound desc = could not find container \"599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9\": container with ID starting with 599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9 not found: ID does not exist"
Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.971465 4769 scope.go:117] "RemoveContainer" containerID="73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609"
Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.971651 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609"} err="failed to get container status \"73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609\": rpc error: code = NotFound desc = could not find container \"73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609\": container with ID starting with 73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609 not found: ID does not exist"
Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.971673 4769 scope.go:117] "RemoveContainer" containerID="bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7"
Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.972552 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7"} err="failed to get container status \"bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\": rpc error: code = NotFound desc = could not find container \"bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\": container with ID starting with bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7 not found: ID does not exist"
Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.972578 4769 scope.go:117] "RemoveContainer" containerID="d764d72b65db80595dfba72a7b23c9291cb7dd526ced59c756d4b53cc30aaa5b"
Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.972833 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d764d72b65db80595dfba72a7b23c9291cb7dd526ced59c756d4b53cc30aaa5b"} err="failed to get container status \"d764d72b65db80595dfba72a7b23c9291cb7dd526ced59c756d4b53cc30aaa5b\": rpc error: code = NotFound desc = could not find container \"d764d72b65db80595dfba72a7b23c9291cb7dd526ced59c756d4b53cc30aaa5b\": container with ID starting with d764d72b65db80595dfba72a7b23c9291cb7dd526ced59c756d4b53cc30aaa5b not found: ID does not exist"
Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.972856 4769 scope.go:117] "RemoveContainer" containerID="5f2aa40f7fb64759fa8a5f718239811c0af3f0c9eee1849d5e53acc1d4c486b6"
Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.973070 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f2aa40f7fb64759fa8a5f718239811c0af3f0c9eee1849d5e53acc1d4c486b6"} err="failed to get container status \"5f2aa40f7fb64759fa8a5f718239811c0af3f0c9eee1849d5e53acc1d4c486b6\": rpc error: code = NotFound desc = could not find container \"5f2aa40f7fb64759fa8a5f718239811c0af3f0c9eee1849d5e53acc1d4c486b6\": container with ID starting with 5f2aa40f7fb64759fa8a5f718239811c0af3f0c9eee1849d5e53acc1d4c486b6 not found: ID does not exist"
Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.973095 4769 scope.go:117] "RemoveContainer" containerID="3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821"
Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.973311 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821"} err="failed to get container status \"3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821\": rpc error: code = NotFound desc = could not find container \"3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821\": container with ID starting with 3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821 not found: ID does not exist"
Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.973337 4769 scope.go:117] "RemoveContainer" containerID="a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571"
Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.973523 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571"} err="failed to get container status \"a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571\": rpc error: code = NotFound desc = could not find container \"a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571\": container with ID starting with a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571 not found: ID does not exist"
Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.973545 4769 scope.go:117] "RemoveContainer" containerID="662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94"
Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.973703 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94"} err="failed to get container status \"662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94\": rpc error: code = NotFound desc = could not find container \"662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94\": container with ID starting with 662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94 not found: ID does not exist"
Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.973725 4769 scope.go:117] "RemoveContainer" containerID="926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624"
Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.973967 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624"} err="failed to get container status \"926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624\": rpc error: code = NotFound desc = could not find container \"926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624\": container with ID starting with 926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624 not found: ID does not exist"
Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.973989 4769 scope.go:117] "RemoveContainer" containerID="f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44"
Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.974160 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44"} err="failed to get container status \"f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44\": rpc error: code = NotFound desc = could not find container \"f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44\": container with ID starting with f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44 not found: ID does not exist"
Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.974183 4769 scope.go:117] "RemoveContainer" containerID="599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9"
Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.974350 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9"} err="failed to get container status \"599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9\": rpc error: code = NotFound desc = could not find container \"599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9\": container with ID starting with 599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9 not found: ID does not exist"
Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.974379 4769 scope.go:117] "RemoveContainer" containerID="73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609"
Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.974628 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609"} err="failed to get container status \"73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609\": rpc error: code = NotFound desc = could not find container \"73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609\": container with ID starting with 73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609 not found: ID does not exist"
Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.974654 4769 scope.go:117] "RemoveContainer" containerID="bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7"
Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.974883 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7"} err="failed to get container status \"bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\": rpc error: code = NotFound desc = could not find container \"bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\": container with ID starting with bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7 not found: ID does not exist"
Jan 22 13:54:47 crc kubenswrapper[4769]: I0122 13:54:47.038145 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx"
Jan 22 13:54:47 crc kubenswrapper[4769]: I0122 13:54:47.104875 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jrg8z"]
Jan 22 13:54:47 crc kubenswrapper[4769]: I0122 13:54:47.110525 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jrg8z"]
Jan 22 13:54:47 crc kubenswrapper[4769]: I0122 13:54:47.771151 4769 generic.go:334] "Generic (PLEG): container finished" podID="5426c965-79a4-46ea-b709-949e0a5e3065" containerID="08e06714602e0437c8faa07572c975ea10b2559622327eb668e75ca879a08e8e" exitCode=0
Jan 22 13:54:47 crc kubenswrapper[4769]: I0122 13:54:47.771228 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" event={"ID":"5426c965-79a4-46ea-b709-949e0a5e3065","Type":"ContainerDied","Data":"08e06714602e0437c8faa07572c975ea10b2559622327eb668e75ca879a08e8e"}
Jan 22 13:54:47 crc kubenswrapper[4769]: I0122 13:54:47.771295 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" event={"ID":"5426c965-79a4-46ea-b709-949e0a5e3065","Type":"ContainerStarted","Data":"9c03d0d604a1fdcab84ed3e1ccfb05929328f575ebbcd28482da2348e89ffe3b"}
Jan 22 13:54:47 crc kubenswrapper[4769]: I0122 13:54:47.777755 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fclh4_d4186e93-df8a-49d3-9068-c8b8acd05baa/kube-multus/2.log"
Jan 22 13:54:48 crc kubenswrapper[4769]: I0122 13:54:48.786449 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" event={"ID":"5426c965-79a4-46ea-b709-949e0a5e3065","Type":"ContainerStarted","Data":"4093a2a7e4de81ed357b13c0dd6bda0022fc81bb11e2545dab031db1f97fbfbc"}
Jan 22 13:54:48 crc kubenswrapper[4769]: I0122 13:54:48.786770 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" event={"ID":"5426c965-79a4-46ea-b709-949e0a5e3065","Type":"ContainerStarted","Data":"216e5d493ab6a7220a5e0b1b7060228dc5b35f33db0e39260bf16b54571ed24a"}
event={"ID":"5426c965-79a4-46ea-b709-949e0a5e3065","Type":"ContainerStarted","Data":"216e5d493ab6a7220a5e0b1b7060228dc5b35f33db0e39260bf16b54571ed24a"} Jan 22 13:54:48 crc kubenswrapper[4769]: I0122 13:54:48.786782 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" event={"ID":"5426c965-79a4-46ea-b709-949e0a5e3065","Type":"ContainerStarted","Data":"7d3f0193784f8def9429bb29a43f8846eb077816e7dc8e432561502f25fa7e28"} Jan 22 13:54:48 crc kubenswrapper[4769]: I0122 13:54:48.786813 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" event={"ID":"5426c965-79a4-46ea-b709-949e0a5e3065","Type":"ContainerStarted","Data":"556c5415aa810a0c23f3ddb28b87a525af8757c4278234ab9dd66732b0ff8ee1"} Jan 22 13:54:48 crc kubenswrapper[4769]: I0122 13:54:48.786824 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" event={"ID":"5426c965-79a4-46ea-b709-949e0a5e3065","Type":"ContainerStarted","Data":"4a335007f079f2ed399a0bd85c2fff302757fd7210eb4f8c7d454205b397f5e8"} Jan 22 13:54:48 crc kubenswrapper[4769]: I0122 13:54:48.786832 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" event={"ID":"5426c965-79a4-46ea-b709-949e0a5e3065","Type":"ContainerStarted","Data":"b8007a22145ce439a54b7f443d0e5e5a15a425ebb0e71b28a29aede8aff375b4"} Jan 22 13:54:48 crc kubenswrapper[4769]: I0122 13:54:48.891616 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" path="/var/lib/kubelet/pods/9c028db8-99b9-422d-ba46-e1a2db06ce3c/volumes" Jan 22 13:54:50 crc kubenswrapper[4769]: I0122 13:54:50.802329 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" event={"ID":"5426c965-79a4-46ea-b709-949e0a5e3065","Type":"ContainerStarted","Data":"6cdf34ed3858d7dac5b9f1e6fa20d6c2d49f0852b3b073d23cf4b9e75c3f6e23"} Jan 22 13:54:53 crc kubenswrapper[4769]: I0122 13:54:53.827588 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" event={"ID":"5426c965-79a4-46ea-b709-949e0a5e3065","Type":"ContainerStarted","Data":"2a32d995333c2eacd75275baaa95cf7274a2ae675c2ed6497c2574613548d4f0"} Jan 22 13:54:53 crc kubenswrapper[4769]: I0122 13:54:53.828225 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:53 crc kubenswrapper[4769]: I0122 13:54:53.828293 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:53 crc kubenswrapper[4769]: I0122 13:54:53.855966 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:53 crc kubenswrapper[4769]: I0122 13:54:53.867641 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" podStartSLOduration=7.867624266 podStartE2EDuration="7.867624266s" podCreationTimestamp="2026-01-22 13:54:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:54:53.865156748 +0000 UTC m=+673.276266697" watchObservedRunningTime="2026-01-22 13:54:53.867624266 +0000 UTC m=+673.278734195" Jan 22 13:54:54 crc kubenswrapper[4769]: I0122 13:54:54.832985 4769 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:54 crc kubenswrapper[4769]: I0122 13:54:54.862419 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:58 crc kubenswrapper[4769]: I0122 13:54:58.883829 4769 scope.go:117] "RemoveContainer" containerID="8b525990498eb9a71e43d42c3191a2ad5043bcf24c857f8db1dc71b1a487d0c3" Jan 22 13:54:58 crc kubenswrapper[4769]: E0122 13:54:58.884643 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-fclh4_openshift-multus(d4186e93-df8a-49d3-9068-c8b8acd05baa)\"" pod="openshift-multus/multus-fclh4" podUID="d4186e93-df8a-49d3-9068-c8b8acd05baa" Jan 22 13:55:13 crc kubenswrapper[4769]: I0122 13:55:13.883625 4769 scope.go:117] "RemoveContainer" containerID="8b525990498eb9a71e43d42c3191a2ad5043bcf24c857f8db1dc71b1a487d0c3" Jan 22 13:55:14 crc kubenswrapper[4769]: I0122 13:55:14.961011 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fclh4_d4186e93-df8a-49d3-9068-c8b8acd05baa/kube-multus/2.log" Jan 22 13:55:14 crc kubenswrapper[4769]: I0122 13:55:14.961542 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fclh4" event={"ID":"d4186e93-df8a-49d3-9068-c8b8acd05baa","Type":"ContainerStarted","Data":"f792b3c29b906b7ea6f4c0ef1e8550b85afba18327b0c1d9f0d5e9adbf131ef2"} Jan 22 13:55:17 crc kubenswrapper[4769]: I0122 13:55:17.066670 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:55:26 crc kubenswrapper[4769]: I0122 13:55:26.796393 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71342pvx"] Jan 22 13:55:26 crc kubenswrapper[4769]: I0122 13:55:26.798210 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71342pvx" Jan 22 13:55:26 crc kubenswrapper[4769]: I0122 13:55:26.800060 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 22 13:55:26 crc kubenswrapper[4769]: I0122 13:55:26.805609 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71342pvx"] Jan 22 13:55:26 crc kubenswrapper[4769]: I0122 13:55:26.904174 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/38dd0c5f-6afb-4730-8900-e3e8b33f282a-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71342pvx\" (UID: \"38dd0c5f-6afb-4730-8900-e3e8b33f282a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71342pvx" Jan 22 13:55:26 crc kubenswrapper[4769]: I0122 13:55:26.904379 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/38dd0c5f-6afb-4730-8900-e3e8b33f282a-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71342pvx\" (UID: \"38dd0c5f-6afb-4730-8900-e3e8b33f282a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71342pvx" Jan 22 13:55:26 crc kubenswrapper[4769]: I0122 13:55:26.904443 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czqjz\" (UniqueName: \"kubernetes.io/projected/38dd0c5f-6afb-4730-8900-e3e8b33f282a-kube-api-access-czqjz\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71342pvx\" (UID: \"38dd0c5f-6afb-4730-8900-e3e8b33f282a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71342pvx" Jan 22 13:55:27 crc kubenswrapper[4769]: I0122 13:55:27.005530 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/38dd0c5f-6afb-4730-8900-e3e8b33f282a-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71342pvx\" (UID: \"38dd0c5f-6afb-4730-8900-e3e8b33f282a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71342pvx" Jan 22 13:55:27 crc kubenswrapper[4769]: I0122 13:55:27.005618 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czqjz\" (UniqueName: \"kubernetes.io/projected/38dd0c5f-6afb-4730-8900-e3e8b33f282a-kube-api-access-czqjz\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71342pvx\" (UID: \"38dd0c5f-6afb-4730-8900-e3e8b33f282a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71342pvx" Jan 22 13:55:27 crc kubenswrapper[4769]: I0122 13:55:27.005681 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/38dd0c5f-6afb-4730-8900-e3e8b33f282a-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71342pvx\" (UID: \"38dd0c5f-6afb-4730-8900-e3e8b33f282a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71342pvx" Jan 22 13:55:27 crc kubenswrapper[4769]: I0122 13:55:27.006216 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/38dd0c5f-6afb-4730-8900-e3e8b33f282a-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71342pvx\" (UID: \"38dd0c5f-6afb-4730-8900-e3e8b33f282a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71342pvx" Jan 22 13:55:27 crc kubenswrapper[4769]: I0122 13:55:27.006484 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/38dd0c5f-6afb-4730-8900-e3e8b33f282a-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71342pvx\" (UID: \"38dd0c5f-6afb-4730-8900-e3e8b33f282a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71342pvx" Jan 22 13:55:27 crc kubenswrapper[4769]: I0122 13:55:27.028454 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czqjz\" (UniqueName: \"kubernetes.io/projected/38dd0c5f-6afb-4730-8900-e3e8b33f282a-kube-api-access-czqjz\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71342pvx\" (UID: \"38dd0c5f-6afb-4730-8900-e3e8b33f282a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71342pvx" Jan 22 13:55:27 crc kubenswrapper[4769]: I0122 13:55:27.119265 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71342pvx" Jan 22 13:55:27 crc kubenswrapper[4769]: I0122 13:55:27.340219 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71342pvx"] Jan 22 13:55:28 crc kubenswrapper[4769]: I0122 13:55:28.042972 4769 generic.go:334] "Generic (PLEG): container finished" podID="38dd0c5f-6afb-4730-8900-e3e8b33f282a" containerID="19064cbb406cb69a973f646d395d3f54b43223b566983ec672b8d9a56ee5a4be" exitCode=0 Jan 22 13:55:28 crc kubenswrapper[4769]: I0122 13:55:28.043254 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71342pvx" event={"ID":"38dd0c5f-6afb-4730-8900-e3e8b33f282a","Type":"ContainerDied","Data":"19064cbb406cb69a973f646d395d3f54b43223b566983ec672b8d9a56ee5a4be"} Jan 22 13:55:28 crc kubenswrapper[4769]: I0122 13:55:28.043286 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71342pvx" event={"ID":"38dd0c5f-6afb-4730-8900-e3e8b33f282a","Type":"ContainerStarted","Data":"2d39cf951748c2931cff939383e6cc1c867717da795501975d02dd23004aa1aa"} Jan 22 13:55:30 crc kubenswrapper[4769]: I0122 13:55:30.055691 4769 generic.go:334] "Generic (PLEG): container finished" podID="38dd0c5f-6afb-4730-8900-e3e8b33f282a" containerID="775c8e064f9886ed088946a1c3372fb6398eec8196bd2d1a4eee646c3050fd6e" exitCode=0 Jan 22 13:55:30 crc kubenswrapper[4769]: I0122 13:55:30.055827 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71342pvx" event={"ID":"38dd0c5f-6afb-4730-8900-e3e8b33f282a","Type":"ContainerDied","Data":"775c8e064f9886ed088946a1c3372fb6398eec8196bd2d1a4eee646c3050fd6e"} Jan 22 13:55:31 crc kubenswrapper[4769]: I0122 13:55:31.066038 4769 generic.go:334] "Generic (PLEG): container finished" podID="38dd0c5f-6afb-4730-8900-e3e8b33f282a" containerID="5b9fdd30766e2dbe2204dc878f575cb4f8ab94cb3fdf3ac93191b1f5678788b8" exitCode=0 Jan 22 13:55:31 crc kubenswrapper[4769]: I0122 
13:55:31.066248 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71342pvx" event={"ID":"38dd0c5f-6afb-4730-8900-e3e8b33f282a","Type":"ContainerDied","Data":"5b9fdd30766e2dbe2204dc878f575cb4f8ab94cb3fdf3ac93191b1f5678788b8"} Jan 22 13:55:32 crc kubenswrapper[4769]: I0122 13:55:32.334050 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71342pvx" Jan 22 13:55:32 crc kubenswrapper[4769]: I0122 13:55:32.474386 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/38dd0c5f-6afb-4730-8900-e3e8b33f282a-util\") pod \"38dd0c5f-6afb-4730-8900-e3e8b33f282a\" (UID: \"38dd0c5f-6afb-4730-8900-e3e8b33f282a\") " Jan 22 13:55:32 crc kubenswrapper[4769]: I0122 13:55:32.474447 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czqjz\" (UniqueName: \"kubernetes.io/projected/38dd0c5f-6afb-4730-8900-e3e8b33f282a-kube-api-access-czqjz\") pod \"38dd0c5f-6afb-4730-8900-e3e8b33f282a\" (UID: \"38dd0c5f-6afb-4730-8900-e3e8b33f282a\") " Jan 22 13:55:32 crc kubenswrapper[4769]: I0122 13:55:32.474518 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/38dd0c5f-6afb-4730-8900-e3e8b33f282a-bundle\") pod \"38dd0c5f-6afb-4730-8900-e3e8b33f282a\" (UID: \"38dd0c5f-6afb-4730-8900-e3e8b33f282a\") " Jan 22 13:55:32 crc kubenswrapper[4769]: I0122 13:55:32.475750 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38dd0c5f-6afb-4730-8900-e3e8b33f282a-bundle" (OuterVolumeSpecName: "bundle") pod "38dd0c5f-6afb-4730-8900-e3e8b33f282a" (UID: "38dd0c5f-6afb-4730-8900-e3e8b33f282a"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 13:55:32 crc kubenswrapper[4769]: I0122 13:55:32.481287 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38dd0c5f-6afb-4730-8900-e3e8b33f282a-kube-api-access-czqjz" (OuterVolumeSpecName: "kube-api-access-czqjz") pod "38dd0c5f-6afb-4730-8900-e3e8b33f282a" (UID: "38dd0c5f-6afb-4730-8900-e3e8b33f282a"). InnerVolumeSpecName "kube-api-access-czqjz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:55:32 crc kubenswrapper[4769]: I0122 13:55:32.497984 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38dd0c5f-6afb-4730-8900-e3e8b33f282a-util" (OuterVolumeSpecName: "util") pod "38dd0c5f-6afb-4730-8900-e3e8b33f282a" (UID: "38dd0c5f-6afb-4730-8900-e3e8b33f282a"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 13:55:32 crc kubenswrapper[4769]: I0122 13:55:32.576602 4769 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/38dd0c5f-6afb-4730-8900-e3e8b33f282a-util\") on node \"crc\" DevicePath \"\"" Jan 22 13:55:32 crc kubenswrapper[4769]: I0122 13:55:32.576644 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-czqjz\" (UniqueName: \"kubernetes.io/projected/38dd0c5f-6afb-4730-8900-e3e8b33f282a-kube-api-access-czqjz\") on node \"crc\" DevicePath \"\"" Jan 22 13:55:32 crc kubenswrapper[4769]: I0122 13:55:32.576659 4769 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/38dd0c5f-6afb-4730-8900-e3e8b33f282a-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 13:55:33 crc kubenswrapper[4769]: I0122 13:55:33.079090 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71342pvx" event={"ID":"38dd0c5f-6afb-4730-8900-e3e8b33f282a","Type":"ContainerDied","Data":"2d39cf951748c2931cff939383e6cc1c867717da795501975d02dd23004aa1aa"} Jan 22 13:55:33 crc kubenswrapper[4769]: I0122 13:55:33.079402 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d39cf951748c2931cff939383e6cc1c867717da795501975d02dd23004aa1aa" Jan 22 13:55:33 crc kubenswrapper[4769]: I0122 13:55:33.079316 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71342pvx" Jan 22 13:55:35 crc kubenswrapper[4769]: I0122 13:55:35.269128 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-z29kl"] Jan 22 13:55:35 crc kubenswrapper[4769]: E0122 13:55:35.269329 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38dd0c5f-6afb-4730-8900-e3e8b33f282a" containerName="extract" Jan 22 13:55:35 crc kubenswrapper[4769]: I0122 13:55:35.269341 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="38dd0c5f-6afb-4730-8900-e3e8b33f282a" containerName="extract" Jan 22 13:55:35 crc kubenswrapper[4769]: E0122 13:55:35.269354 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38dd0c5f-6afb-4730-8900-e3e8b33f282a" containerName="util" Jan 22 13:55:35 crc kubenswrapper[4769]: I0122 13:55:35.269359 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="38dd0c5f-6afb-4730-8900-e3e8b33f282a" containerName="util" Jan 22 13:55:35 crc kubenswrapper[4769]: E0122 13:55:35.269372 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38dd0c5f-6afb-4730-8900-e3e8b33f282a" containerName="pull" Jan 22 13:55:35 crc kubenswrapper[4769]: I0122 13:55:35.269377 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="38dd0c5f-6afb-4730-8900-e3e8b33f282a" containerName="pull" Jan 22 13:55:35 crc kubenswrapper[4769]: I0122 13:55:35.269473 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="38dd0c5f-6afb-4730-8900-e3e8b33f282a" containerName="extract" Jan 22 13:55:35 crc kubenswrapper[4769]: I0122 13:55:35.269832 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-z29kl" Jan 22 13:55:35 crc kubenswrapper[4769]: I0122 13:55:35.271603 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 22 13:55:35 crc kubenswrapper[4769]: I0122 13:55:35.271648 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 22 13:55:35 crc kubenswrapper[4769]: I0122 13:55:35.271669 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-m2zc9" Jan 22 13:55:35 crc kubenswrapper[4769]: I0122 13:55:35.281152 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-z29kl"] Jan 22 13:55:35 crc kubenswrapper[4769]: I0122 13:55:35.423094 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z45f8\" (UniqueName: \"kubernetes.io/projected/9342ab94-785a-427b-84d2-5ac6ff709531-kube-api-access-z45f8\") pod \"nmstate-operator-646758c888-z29kl\" (UID: \"9342ab94-785a-427b-84d2-5ac6ff709531\") " pod="openshift-nmstate/nmstate-operator-646758c888-z29kl" Jan 22 13:55:35 crc kubenswrapper[4769]: I0122 13:55:35.524443 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z45f8\" (UniqueName: \"kubernetes.io/projected/9342ab94-785a-427b-84d2-5ac6ff709531-kube-api-access-z45f8\") pod \"nmstate-operator-646758c888-z29kl\" (UID: \"9342ab94-785a-427b-84d2-5ac6ff709531\") " pod="openshift-nmstate/nmstate-operator-646758c888-z29kl" Jan 22 13:55:35 crc kubenswrapper[4769]: I0122 13:55:35.565297 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z45f8\" (UniqueName: \"kubernetes.io/projected/9342ab94-785a-427b-84d2-5ac6ff709531-kube-api-access-z45f8\") pod \"nmstate-operator-646758c888-z29kl\" (UID: \"9342ab94-785a-427b-84d2-5ac6ff709531\") " pod="openshift-nmstate/nmstate-operator-646758c888-z29kl" Jan 22 13:55:35 crc kubenswrapper[4769]: I0122 13:55:35.628206 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-z29kl" Jan 22 13:55:35 crc kubenswrapper[4769]: I0122 13:55:35.806795 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-z29kl"] Jan 22 13:55:35 crc kubenswrapper[4769]: W0122 13:55:35.816018 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9342ab94_785a_427b_84d2_5ac6ff709531.slice/crio-069e806f288865a832dfd3b907c515fe4649083305998ee1d7f3d3940c2dd38c WatchSource:0}: Error finding container 069e806f288865a832dfd3b907c515fe4649083305998ee1d7f3d3940c2dd38c: Status 404 returned error can't find the container with id 069e806f288865a832dfd3b907c515fe4649083305998ee1d7f3d3940c2dd38c Jan 22 13:55:36 crc kubenswrapper[4769]: I0122 13:55:36.095426 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-z29kl" event={"ID":"9342ab94-785a-427b-84d2-5ac6ff709531","Type":"ContainerStarted","Data":"069e806f288865a832dfd3b907c515fe4649083305998ee1d7f3d3940c2dd38c"} Jan 22 13:55:39 crc kubenswrapper[4769]: I0122 13:55:39.123164 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-z29kl" event={"ID":"9342ab94-785a-427b-84d2-5ac6ff709531","Type":"ContainerStarted","Data":"293101b908d042393078034a7a5dcb7e5c47787f3f4afe360f5727515724f08b"} Jan 22 13:55:39 crc kubenswrapper[4769]: I0122 13:55:39.145762 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-z29kl" podStartSLOduration=1.833228905 podStartE2EDuration="4.14574572s" podCreationTimestamp="2026-01-22 13:55:35 +0000 UTC" firstStartedPulling="2026-01-22 13:55:35.81750133 +0000 UTC m=+715.228611259" lastFinishedPulling="2026-01-22 13:55:38.130018145 +0000 UTC m=+717.541128074" observedRunningTime="2026-01-22 13:55:39.142651203 +0000 UTC m=+718.553761132" watchObservedRunningTime="2026-01-22 13:55:39.14574572 +0000 UTC m=+718.556855649" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.138942 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-xsnfh"] Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.140101 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-xsnfh" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.142240 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-c7r96" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.150636 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-64j27"] Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.151347 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-64j27" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.156138 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-xsnfh"] Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.160153 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.165667 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-64j27"] Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.178759 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-v6r9x"] Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.179531 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-v6r9x" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.263235 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-t9pnx"] Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.264022 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-t9pnx" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.273090 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.273239 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-zt9n2" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.273450 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.282122 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-t9pnx"] Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.301542 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/7e7ab7e8-7c34-4b26-9c19-33ae90a756ec-dbus-socket\") pod \"nmstate-handler-v6r9x\" (UID: \"7e7ab7e8-7c34-4b26-9c19-33ae90a756ec\") " pod="openshift-nmstate/nmstate-handler-v6r9x" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.301591 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/7e7ab7e8-7c34-4b26-9c19-33ae90a756ec-nmstate-lock\") pod \"nmstate-handler-v6r9x\" (UID: \"7e7ab7e8-7c34-4b26-9c19-33ae90a756ec\") " pod="openshift-nmstate/nmstate-handler-v6r9x" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.301624 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsr4x\" (UniqueName: \"kubernetes.io/projected/880459e4-297b-408b-8205-c2197bf19c18-kube-api-access-qsr4x\") pod \"nmstate-webhook-8474b5b9d8-64j27\" (UID: \"880459e4-297b-408b-8205-c2197bf19c18\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-64j27" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.301851 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxbvk\" (UniqueName: 
\"kubernetes.io/projected/7e7ab7e8-7c34-4b26-9c19-33ae90a756ec-kube-api-access-jxbvk\") pod \"nmstate-handler-v6r9x\" (UID: \"7e7ab7e8-7c34-4b26-9c19-33ae90a756ec\") " pod="openshift-nmstate/nmstate-handler-v6r9x" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.301882 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/880459e4-297b-408b-8205-c2197bf19c18-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-64j27\" (UID: \"880459e4-297b-408b-8205-c2197bf19c18\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-64j27" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.301910 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sdht\" (UniqueName: \"kubernetes.io/projected/fd9c945e-a392-4a96-8a06-893a09e8dc19-kube-api-access-2sdht\") pod \"nmstate-metrics-54757c584b-xsnfh\" (UID: \"fd9c945e-a392-4a96-8a06-893a09e8dc19\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-xsnfh" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.301960 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/7e7ab7e8-7c34-4b26-9c19-33ae90a756ec-ovs-socket\") pod \"nmstate-handler-v6r9x\" (UID: \"7e7ab7e8-7c34-4b26-9c19-33ae90a756ec\") " pod="openshift-nmstate/nmstate-handler-v6r9x" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.402703 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/7e7ab7e8-7c34-4b26-9c19-33ae90a756ec-ovs-socket\") pod \"nmstate-handler-v6r9x\" (UID: \"7e7ab7e8-7c34-4b26-9c19-33ae90a756ec\") " pod="openshift-nmstate/nmstate-handler-v6r9x" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.402763 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/7e7ab7e8-7c34-4b26-9c19-33ae90a756ec-dbus-socket\") pod \"nmstate-handler-v6r9x\" (UID: \"7e7ab7e8-7c34-4b26-9c19-33ae90a756ec\") " pod="openshift-nmstate/nmstate-handler-v6r9x" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.402808 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/bd1eaf1c-9da8-4372-888f-ed8464d4313d-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-t9pnx\" (UID: \"bd1eaf1c-9da8-4372-888f-ed8464d4313d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-t9pnx" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.402834 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/7e7ab7e8-7c34-4b26-9c19-33ae90a756ec-nmstate-lock\") pod \"nmstate-handler-v6r9x\" (UID: \"7e7ab7e8-7c34-4b26-9c19-33ae90a756ec\") " pod="openshift-nmstate/nmstate-handler-v6r9x" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.402852 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/bd1eaf1c-9da8-4372-888f-ed8464d4313d-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-t9pnx\" (UID: \"bd1eaf1c-9da8-4372-888f-ed8464d4313d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-t9pnx" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.402832 4769 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/7e7ab7e8-7c34-4b26-9c19-33ae90a756ec-ovs-socket\") pod \"nmstate-handler-v6r9x\" (UID: \"7e7ab7e8-7c34-4b26-9c19-33ae90a756ec\") " pod="openshift-nmstate/nmstate-handler-v6r9x" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.402877 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsr4x\" (UniqueName: \"kubernetes.io/projected/880459e4-297b-408b-8205-c2197bf19c18-kube-api-access-qsr4x\") pod \"nmstate-webhook-8474b5b9d8-64j27\" (UID: \"880459e4-297b-408b-8205-c2197bf19c18\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-64j27" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.402880 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/7e7ab7e8-7c34-4b26-9c19-33ae90a756ec-nmstate-lock\") pod \"nmstate-handler-v6r9x\" (UID: \"7e7ab7e8-7c34-4b26-9c19-33ae90a756ec\") " pod="openshift-nmstate/nmstate-handler-v6r9x" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.403052 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lt6jx\" (UniqueName: \"kubernetes.io/projected/bd1eaf1c-9da8-4372-888f-ed8464d4313d-kube-api-access-lt6jx\") pod \"nmstate-console-plugin-7754f76f8b-t9pnx\" (UID: \"bd1eaf1c-9da8-4372-888f-ed8464d4313d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-t9pnx" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.403074 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/7e7ab7e8-7c34-4b26-9c19-33ae90a756ec-dbus-socket\") pod \"nmstate-handler-v6r9x\" (UID: \"7e7ab7e8-7c34-4b26-9c19-33ae90a756ec\") " pod="openshift-nmstate/nmstate-handler-v6r9x" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.403114 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxbvk\" (UniqueName: \"kubernetes.io/projected/7e7ab7e8-7c34-4b26-9c19-33ae90a756ec-kube-api-access-jxbvk\") pod \"nmstate-handler-v6r9x\" (UID: \"7e7ab7e8-7c34-4b26-9c19-33ae90a756ec\") " pod="openshift-nmstate/nmstate-handler-v6r9x" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.403153 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/880459e4-297b-408b-8205-c2197bf19c18-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-64j27\" (UID: \"880459e4-297b-408b-8205-c2197bf19c18\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-64j27" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.403192 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2sdht\" (UniqueName: \"kubernetes.io/projected/fd9c945e-a392-4a96-8a06-893a09e8dc19-kube-api-access-2sdht\") pod \"nmstate-metrics-54757c584b-xsnfh\" (UID: \"fd9c945e-a392-4a96-8a06-893a09e8dc19\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-xsnfh" Jan 22 13:55:40 crc kubenswrapper[4769]: E0122 13:55:40.403343 4769 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Jan 22 13:55:40 crc kubenswrapper[4769]: E0122 13:55:40.403392 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/880459e4-297b-408b-8205-c2197bf19c18-tls-key-pair 
podName:880459e4-297b-408b-8205-c2197bf19c18 nodeName:}" failed. No retries permitted until 2026-01-22 13:55:40.903374823 +0000 UTC m=+720.314484752 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/880459e4-297b-408b-8205-c2197bf19c18-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-64j27" (UID: "880459e4-297b-408b-8205-c2197bf19c18") : secret "openshift-nmstate-webhook" not found Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.422575 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2sdht\" (UniqueName: \"kubernetes.io/projected/fd9c945e-a392-4a96-8a06-893a09e8dc19-kube-api-access-2sdht\") pod \"nmstate-metrics-54757c584b-xsnfh\" (UID: \"fd9c945e-a392-4a96-8a06-893a09e8dc19\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-xsnfh" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.422690 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsr4x\" (UniqueName: \"kubernetes.io/projected/880459e4-297b-408b-8205-c2197bf19c18-kube-api-access-qsr4x\") pod \"nmstate-webhook-8474b5b9d8-64j27\" (UID: \"880459e4-297b-408b-8205-c2197bf19c18\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-64j27" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.434900 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxbvk\" (UniqueName: \"kubernetes.io/projected/7e7ab7e8-7c34-4b26-9c19-33ae90a756ec-kube-api-access-jxbvk\") pod \"nmstate-handler-v6r9x\" (UID: \"7e7ab7e8-7c34-4b26-9c19-33ae90a756ec\") " pod="openshift-nmstate/nmstate-handler-v6r9x" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.455970 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-xsnfh" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.457925 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5d5d467dd8-9dd6w"] Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.458531 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5d5d467dd8-9dd6w" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.478480 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5d5d467dd8-9dd6w"] Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.482080 4769 patch_prober.go:28] interesting pod/machine-config-daemon-hwhw7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.482138 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.494075 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-v6r9x" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.504848 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lt6jx\" (UniqueName: \"kubernetes.io/projected/bd1eaf1c-9da8-4372-888f-ed8464d4313d-kube-api-access-lt6jx\") pod \"nmstate-console-plugin-7754f76f8b-t9pnx\" (UID: \"bd1eaf1c-9da8-4372-888f-ed8464d4313d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-t9pnx" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.504944 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/bd1eaf1c-9da8-4372-888f-ed8464d4313d-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-t9pnx\" (UID: \"bd1eaf1c-9da8-4372-888f-ed8464d4313d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-t9pnx" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.504971 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/bd1eaf1c-9da8-4372-888f-ed8464d4313d-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-t9pnx\" (UID: \"bd1eaf1c-9da8-4372-888f-ed8464d4313d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-t9pnx" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.506184 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/bd1eaf1c-9da8-4372-888f-ed8464d4313d-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-t9pnx\" (UID: \"bd1eaf1c-9da8-4372-888f-ed8464d4313d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-t9pnx" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.508777 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/bd1eaf1c-9da8-4372-888f-ed8464d4313d-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-t9pnx\" (UID: \"bd1eaf1c-9da8-4372-888f-ed8464d4313d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-t9pnx" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.526624 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lt6jx\" (UniqueName: \"kubernetes.io/projected/bd1eaf1c-9da8-4372-888f-ed8464d4313d-kube-api-access-lt6jx\") pod \"nmstate-console-plugin-7754f76f8b-t9pnx\" (UID: \"bd1eaf1c-9da8-4372-888f-ed8464d4313d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-t9pnx" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.585270 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-t9pnx" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.606193 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/35f692b2-7216-401d-8a55-279589beda2a-trusted-ca-bundle\") pod \"console-5d5d467dd8-9dd6w\" (UID: \"35f692b2-7216-401d-8a55-279589beda2a\") " pod="openshift-console/console-5d5d467dd8-9dd6w" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.606242 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/35f692b2-7216-401d-8a55-279589beda2a-console-serving-cert\") pod \"console-5d5d467dd8-9dd6w\" (UID: \"35f692b2-7216-401d-8a55-279589beda2a\") " pod="openshift-console/console-5d5d467dd8-9dd6w" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.606272 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqh5h\" (UniqueName: \"kubernetes.io/projected/35f692b2-7216-401d-8a55-279589beda2a-kube-api-access-dqh5h\") pod \"console-5d5d467dd8-9dd6w\" (UID: \"35f692b2-7216-401d-8a55-279589beda2a\") " pod="openshift-console/console-5d5d467dd8-9dd6w" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.606291 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/35f692b2-7216-401d-8a55-279589beda2a-service-ca\") pod \"console-5d5d467dd8-9dd6w\" (UID: \"35f692b2-7216-401d-8a55-279589beda2a\") " pod="openshift-console/console-5d5d467dd8-9dd6w" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.606325 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/35f692b2-7216-401d-8a55-279589beda2a-console-config\") pod \"console-5d5d467dd8-9dd6w\" (UID: \"35f692b2-7216-401d-8a55-279589beda2a\") " pod="openshift-console/console-5d5d467dd8-9dd6w" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.606363 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/35f692b2-7216-401d-8a55-279589beda2a-console-oauth-config\") pod \"console-5d5d467dd8-9dd6w\" (UID: \"35f692b2-7216-401d-8a55-279589beda2a\") " pod="openshift-console/console-5d5d467dd8-9dd6w" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.606632 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/35f692b2-7216-401d-8a55-279589beda2a-oauth-serving-cert\") pod \"console-5d5d467dd8-9dd6w\" (UID: \"35f692b2-7216-401d-8a55-279589beda2a\") " pod="openshift-console/console-5d5d467dd8-9dd6w" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.677251 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-xsnfh"] Jan 22 13:55:40 crc kubenswrapper[4769]: W0122 13:55:40.685728 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd9c945e_a392_4a96_8a06_893a09e8dc19.slice/crio-4617d9a2743eeda43b3412fca9684921845011c28587163abeb03ce5f4ed7b03 WatchSource:0}: Error finding container 
4617d9a2743eeda43b3412fca9684921845011c28587163abeb03ce5f4ed7b03: Status 404 returned error can't find the container with id 4617d9a2743eeda43b3412fca9684921845011c28587163abeb03ce5f4ed7b03 Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.707384 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/35f692b2-7216-401d-8a55-279589beda2a-console-config\") pod \"console-5d5d467dd8-9dd6w\" (UID: \"35f692b2-7216-401d-8a55-279589beda2a\") " pod="openshift-console/console-5d5d467dd8-9dd6w" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.707441 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/35f692b2-7216-401d-8a55-279589beda2a-console-oauth-config\") pod \"console-5d5d467dd8-9dd6w\" (UID: \"35f692b2-7216-401d-8a55-279589beda2a\") " pod="openshift-console/console-5d5d467dd8-9dd6w" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.707487 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/35f692b2-7216-401d-8a55-279589beda2a-oauth-serving-cert\") pod \"console-5d5d467dd8-9dd6w\" (UID: \"35f692b2-7216-401d-8a55-279589beda2a\") " pod="openshift-console/console-5d5d467dd8-9dd6w" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.707505 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/35f692b2-7216-401d-8a55-279589beda2a-trusted-ca-bundle\") pod \"console-5d5d467dd8-9dd6w\" (UID: \"35f692b2-7216-401d-8a55-279589beda2a\") " pod="openshift-console/console-5d5d467dd8-9dd6w" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.707532 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/35f692b2-7216-401d-8a55-279589beda2a-console-serving-cert\") pod \"console-5d5d467dd8-9dd6w\" (UID: \"35f692b2-7216-401d-8a55-279589beda2a\") " pod="openshift-console/console-5d5d467dd8-9dd6w" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.707567 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqh5h\" (UniqueName: \"kubernetes.io/projected/35f692b2-7216-401d-8a55-279589beda2a-kube-api-access-dqh5h\") pod \"console-5d5d467dd8-9dd6w\" (UID: \"35f692b2-7216-401d-8a55-279589beda2a\") " pod="openshift-console/console-5d5d467dd8-9dd6w" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.707588 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/35f692b2-7216-401d-8a55-279589beda2a-service-ca\") pod \"console-5d5d467dd8-9dd6w\" (UID: \"35f692b2-7216-401d-8a55-279589beda2a\") " pod="openshift-console/console-5d5d467dd8-9dd6w" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.708448 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/35f692b2-7216-401d-8a55-279589beda2a-console-config\") pod \"console-5d5d467dd8-9dd6w\" (UID: \"35f692b2-7216-401d-8a55-279589beda2a\") " pod="openshift-console/console-5d5d467dd8-9dd6w" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.708612 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/35f692b2-7216-401d-8a55-279589beda2a-oauth-serving-cert\") pod \"console-5d5d467dd8-9dd6w\" (UID: \"35f692b2-7216-401d-8a55-279589beda2a\") " pod="openshift-console/console-5d5d467dd8-9dd6w" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.708630 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/35f692b2-7216-401d-8a55-279589beda2a-service-ca\") pod \"console-5d5d467dd8-9dd6w\" (UID: \"35f692b2-7216-401d-8a55-279589beda2a\") " pod="openshift-console/console-5d5d467dd8-9dd6w" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.708780 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/35f692b2-7216-401d-8a55-279589beda2a-trusted-ca-bundle\") pod \"console-5d5d467dd8-9dd6w\" (UID: \"35f692b2-7216-401d-8a55-279589beda2a\") " pod="openshift-console/console-5d5d467dd8-9dd6w" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.727998 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqh5h\" (UniqueName: \"kubernetes.io/projected/35f692b2-7216-401d-8a55-279589beda2a-kube-api-access-dqh5h\") pod \"console-5d5d467dd8-9dd6w\" (UID: \"35f692b2-7216-401d-8a55-279589beda2a\") " pod="openshift-console/console-5d5d467dd8-9dd6w" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.728854 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/35f692b2-7216-401d-8a55-279589beda2a-console-serving-cert\") pod \"console-5d5d467dd8-9dd6w\" (UID: \"35f692b2-7216-401d-8a55-279589beda2a\") " pod="openshift-console/console-5d5d467dd8-9dd6w" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.729212 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/35f692b2-7216-401d-8a55-279589beda2a-console-oauth-config\") pod \"console-5d5d467dd8-9dd6w\" (UID: \"35f692b2-7216-401d-8a55-279589beda2a\") " pod="openshift-console/console-5d5d467dd8-9dd6w" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.812028 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5d5d467dd8-9dd6w" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.834644 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-t9pnx"] Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.913390 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/880459e4-297b-408b-8205-c2197bf19c18-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-64j27\" (UID: \"880459e4-297b-408b-8205-c2197bf19c18\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-64j27" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.918864 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/880459e4-297b-408b-8205-c2197bf19c18-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-64j27\" (UID: \"880459e4-297b-408b-8205-c2197bf19c18\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-64j27" Jan 22 13:55:41 crc kubenswrapper[4769]: I0122 13:55:41.067358 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-64j27" Jan 22 13:55:41 crc kubenswrapper[4769]: I0122 13:55:41.121657 4769 scope.go:117] "RemoveContainer" containerID="3104553fb5aa42e836333e0998d4bb894a479a4adf589398bbdf1b42722c06a3" Jan 22 13:55:41 crc kubenswrapper[4769]: I0122 13:55:41.140974 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-v6r9x" event={"ID":"7e7ab7e8-7c34-4b26-9c19-33ae90a756ec","Type":"ContainerStarted","Data":"e66f3bc9ebb33eaeb4a530134b347879f3218e5ed4f23520253f0c694fc8a18f"} Jan 22 13:55:41 crc kubenswrapper[4769]: I0122 13:55:41.142814 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-t9pnx" event={"ID":"bd1eaf1c-9da8-4372-888f-ed8464d4313d","Type":"ContainerStarted","Data":"a41050fb6fbd73d616919cb58ec9e77609770a44987fa43da1987488b161daa4"} Jan 22 13:55:41 crc kubenswrapper[4769]: I0122 13:55:41.144080 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-xsnfh" event={"ID":"fd9c945e-a392-4a96-8a06-893a09e8dc19","Type":"ContainerStarted","Data":"4617d9a2743eeda43b3412fca9684921845011c28587163abeb03ce5f4ed7b03"} Jan 22 13:55:41 crc kubenswrapper[4769]: I0122 13:55:41.211925 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5d5d467dd8-9dd6w"] Jan 22 13:55:41 crc kubenswrapper[4769]: W0122 13:55:41.228684 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod35f692b2_7216_401d_8a55_279589beda2a.slice/crio-451b555a59aef0d9dbb265ad743ca2cc8ce908690287247b97f8ab7a1a0f031a WatchSource:0}: Error finding container 451b555a59aef0d9dbb265ad743ca2cc8ce908690287247b97f8ab7a1a0f031a: Status 404 returned error can't find the container with id 451b555a59aef0d9dbb265ad743ca2cc8ce908690287247b97f8ab7a1a0f031a Jan 22 13:55:41 crc kubenswrapper[4769]: I0122 13:55:41.246832 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-64j27"] Jan 22 13:55:41 crc kubenswrapper[4769]: W0122 13:55:41.263656 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod880459e4_297b_408b_8205_c2197bf19c18.slice/crio-272baa9c94b6e441c0a3c451c542467b4ca7ba033904be8778ca6b4a22c995dc WatchSource:0}: Error finding container 272baa9c94b6e441c0a3c451c542467b4ca7ba033904be8778ca6b4a22c995dc: Status 404 returned error can't find the container with id 272baa9c94b6e441c0a3c451c542467b4ca7ba033904be8778ca6b4a22c995dc Jan 22 13:55:42 crc kubenswrapper[4769]: I0122 13:55:42.157554 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-64j27" event={"ID":"880459e4-297b-408b-8205-c2197bf19c18","Type":"ContainerStarted","Data":"272baa9c94b6e441c0a3c451c542467b4ca7ba033904be8778ca6b4a22c995dc"} Jan 22 13:55:42 crc kubenswrapper[4769]: I0122 13:55:42.159433 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d5d467dd8-9dd6w" event={"ID":"35f692b2-7216-401d-8a55-279589beda2a","Type":"ContainerStarted","Data":"8c215d89f033807952cc94109893a2deb3a0c11b0ecc1c5495156e88cf3fa24f"} Jan 22 13:55:42 crc kubenswrapper[4769]: I0122 13:55:42.159466 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d5d467dd8-9dd6w" 
event={"ID":"35f692b2-7216-401d-8a55-279589beda2a","Type":"ContainerStarted","Data":"451b555a59aef0d9dbb265ad743ca2cc8ce908690287247b97f8ab7a1a0f031a"} Jan 22 13:55:42 crc kubenswrapper[4769]: I0122 13:55:42.176026 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5d5d467dd8-9dd6w" podStartSLOduration=2.176007981 podStartE2EDuration="2.176007981s" podCreationTimestamp="2026-01-22 13:55:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:55:42.175792565 +0000 UTC m=+721.586902494" watchObservedRunningTime="2026-01-22 13:55:42.176007981 +0000 UTC m=+721.587117910" Jan 22 13:55:44 crc kubenswrapper[4769]: I0122 13:55:44.171500 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-xsnfh" event={"ID":"fd9c945e-a392-4a96-8a06-893a09e8dc19","Type":"ContainerStarted","Data":"d371cdda7780e170e717d3cb54842c56594eeda37df861323015ed2a09b1034d"} Jan 22 13:55:44 crc kubenswrapper[4769]: I0122 13:55:44.172692 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-v6r9x" event={"ID":"7e7ab7e8-7c34-4b26-9c19-33ae90a756ec","Type":"ContainerStarted","Data":"cb8f3370fccddcdc502824964de63c781194da673b8dd45aec53cd4d40cd32dc"} Jan 22 13:55:44 crc kubenswrapper[4769]: I0122 13:55:44.173254 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-v6r9x" Jan 22 13:55:44 crc kubenswrapper[4769]: I0122 13:55:44.176238 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-t9pnx" event={"ID":"bd1eaf1c-9da8-4372-888f-ed8464d4313d","Type":"ContainerStarted","Data":"c1203e1644af8987291fba5f98e354394eaac11495d7d41015064ec135de716a"} Jan 22 13:55:44 crc kubenswrapper[4769]: I0122 13:55:44.177978 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-64j27" event={"ID":"880459e4-297b-408b-8205-c2197bf19c18","Type":"ContainerStarted","Data":"e8d1c13a397ffb3088eb9b597d7615a904b01631c7149224133c8bf341a4e101"} Jan 22 13:55:44 crc kubenswrapper[4769]: I0122 13:55:44.178328 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-64j27" Jan 22 13:55:44 crc kubenswrapper[4769]: I0122 13:55:44.206678 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-v6r9x" podStartSLOduration=1.230778186 podStartE2EDuration="4.206659588s" podCreationTimestamp="2026-01-22 13:55:40 +0000 UTC" firstStartedPulling="2026-01-22 13:55:40.534577018 +0000 UTC m=+719.945686947" lastFinishedPulling="2026-01-22 13:55:43.5104584 +0000 UTC m=+722.921568349" observedRunningTime="2026-01-22 13:55:44.189787717 +0000 UTC m=+723.600897646" watchObservedRunningTime="2026-01-22 13:55:44.206659588 +0000 UTC m=+723.617769517" Jan 22 13:55:44 crc kubenswrapper[4769]: I0122 13:55:44.208180 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-t9pnx" podStartSLOduration=1.5302433519999998 podStartE2EDuration="4.208169407s" podCreationTimestamp="2026-01-22 13:55:40 +0000 UTC" firstStartedPulling="2026-01-22 13:55:40.842125315 +0000 UTC m=+720.253235244" lastFinishedPulling="2026-01-22 13:55:43.52005136 +0000 UTC m=+722.931161299" observedRunningTime="2026-01-22 13:55:44.203409178 
+0000 UTC m=+723.614519117" watchObservedRunningTime="2026-01-22 13:55:44.208169407 +0000 UTC m=+723.619279336" Jan 22 13:55:46 crc kubenswrapper[4769]: I0122 13:55:46.189726 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-xsnfh" event={"ID":"fd9c945e-a392-4a96-8a06-893a09e8dc19","Type":"ContainerStarted","Data":"de117665bf3196559046ca3868db77b1810705d365e0e73def649688a051f52e"} Jan 22 13:55:46 crc kubenswrapper[4769]: I0122 13:55:46.206607 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-xsnfh" podStartSLOduration=1.303999334 podStartE2EDuration="6.20658188s" podCreationTimestamp="2026-01-22 13:55:40 +0000 UTC" firstStartedPulling="2026-01-22 13:55:40.687602668 +0000 UTC m=+720.098712597" lastFinishedPulling="2026-01-22 13:55:45.590185214 +0000 UTC m=+725.001295143" observedRunningTime="2026-01-22 13:55:46.203405801 +0000 UTC m=+725.614515740" watchObservedRunningTime="2026-01-22 13:55:46.20658188 +0000 UTC m=+725.617691809" Jan 22 13:55:46 crc kubenswrapper[4769]: I0122 13:55:46.207217 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-64j27" podStartSLOduration=3.947649724 podStartE2EDuration="6.207211056s" podCreationTimestamp="2026-01-22 13:55:40 +0000 UTC" firstStartedPulling="2026-01-22 13:55:41.265970665 +0000 UTC m=+720.677080594" lastFinishedPulling="2026-01-22 13:55:43.525531957 +0000 UTC m=+722.936641926" observedRunningTime="2026-01-22 13:55:44.248568245 +0000 UTC m=+723.659678184" watchObservedRunningTime="2026-01-22 13:55:46.207211056 +0000 UTC m=+725.618320995" Jan 22 13:55:50 crc kubenswrapper[4769]: I0122 13:55:50.520019 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-v6r9x" Jan 22 13:55:50 crc kubenswrapper[4769]: I0122 13:55:50.813568 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5d5d467dd8-9dd6w" Jan 22 13:55:50 crc kubenswrapper[4769]: I0122 13:55:50.814429 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5d5d467dd8-9dd6w" Jan 22 13:55:50 crc kubenswrapper[4769]: I0122 13:55:50.821334 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5d5d467dd8-9dd6w" Jan 22 13:55:51 crc kubenswrapper[4769]: I0122 13:55:51.235239 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5d5d467dd8-9dd6w" Jan 22 13:55:51 crc kubenswrapper[4769]: I0122 13:55:51.319597 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-nwrtw"] Jan 22 13:56:01 crc kubenswrapper[4769]: I0122 13:56:01.073932 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-64j27" Jan 22 13:56:10 crc kubenswrapper[4769]: I0122 13:56:10.482406 4769 patch_prober.go:28] interesting pod/machine-config-daemon-hwhw7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 13:56:10 crc kubenswrapper[4769]: I0122 13:56:10.482909 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" 
podUID="f0af8746-c9f0-48e6-8a60-02fed286b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 13:56:11 crc kubenswrapper[4769]: I0122 13:56:11.580523 4769 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 22 13:56:16 crc kubenswrapper[4769]: I0122 13:56:16.345313 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc77w5v"] Jan 22 13:56:16 crc kubenswrapper[4769]: I0122 13:56:16.348201 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc77w5v" Jan 22 13:56:16 crc kubenswrapper[4769]: I0122 13:56:16.354079 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 22 13:56:16 crc kubenswrapper[4769]: I0122 13:56:16.365859 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc77w5v"] Jan 22 13:56:16 crc kubenswrapper[4769]: I0122 13:56:16.370484 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-nwrtw" podUID="9fa4c168-21ea-4f79-a600-7f3c8f656bd0" containerName="console" containerID="cri-o://b84cebc5b675e12661d4f7b983dcf05ea20ef3d051e2af2e9f65b08adbb73089" gracePeriod=15 Jan 22 13:56:16 crc kubenswrapper[4769]: I0122 13:56:16.414273 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2bd12d13-4630-4e58-95dd-7e6b2bb89428-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc77w5v\" (UID: \"2bd12d13-4630-4e58-95dd-7e6b2bb89428\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc77w5v" Jan 22 13:56:16 crc kubenswrapper[4769]: I0122 13:56:16.414331 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2bd12d13-4630-4e58-95dd-7e6b2bb89428-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc77w5v\" (UID: \"2bd12d13-4630-4e58-95dd-7e6b2bb89428\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc77w5v" Jan 22 13:56:16 crc kubenswrapper[4769]: I0122 13:56:16.414355 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69wvb\" (UniqueName: \"kubernetes.io/projected/2bd12d13-4630-4e58-95dd-7e6b2bb89428-kube-api-access-69wvb\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc77w5v\" (UID: \"2bd12d13-4630-4e58-95dd-7e6b2bb89428\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc77w5v" Jan 22 13:56:16 crc kubenswrapper[4769]: I0122 13:56:16.515244 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2bd12d13-4630-4e58-95dd-7e6b2bb89428-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc77w5v\" (UID: \"2bd12d13-4630-4e58-95dd-7e6b2bb89428\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc77w5v" Jan 22 13:56:16 crc kubenswrapper[4769]: I0122 13:56:16.515300 4769 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2bd12d13-4630-4e58-95dd-7e6b2bb89428-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc77w5v\" (UID: \"2bd12d13-4630-4e58-95dd-7e6b2bb89428\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc77w5v" Jan 22 13:56:16 crc kubenswrapper[4769]: I0122 13:56:16.515331 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69wvb\" (UniqueName: \"kubernetes.io/projected/2bd12d13-4630-4e58-95dd-7e6b2bb89428-kube-api-access-69wvb\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc77w5v\" (UID: \"2bd12d13-4630-4e58-95dd-7e6b2bb89428\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc77w5v" Jan 22 13:56:16 crc kubenswrapper[4769]: I0122 13:56:16.516375 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2bd12d13-4630-4e58-95dd-7e6b2bb89428-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc77w5v\" (UID: \"2bd12d13-4630-4e58-95dd-7e6b2bb89428\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc77w5v" Jan 22 13:56:16 crc kubenswrapper[4769]: I0122 13:56:16.516559 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2bd12d13-4630-4e58-95dd-7e6b2bb89428-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc77w5v\" (UID: \"2bd12d13-4630-4e58-95dd-7e6b2bb89428\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc77w5v" Jan 22 13:56:16 crc kubenswrapper[4769]: I0122 13:56:16.540445 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69wvb\" (UniqueName: \"kubernetes.io/projected/2bd12d13-4630-4e58-95dd-7e6b2bb89428-kube-api-access-69wvb\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc77w5v\" (UID: \"2bd12d13-4630-4e58-95dd-7e6b2bb89428\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc77w5v" Jan 22 13:56:16 crc kubenswrapper[4769]: I0122 13:56:16.669885 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc77w5v" Jan 22 13:56:16 crc kubenswrapper[4769]: I0122 13:56:16.741678 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-nwrtw_9fa4c168-21ea-4f79-a600-7f3c8f656bd0/console/0.log" Jan 22 13:56:16 crc kubenswrapper[4769]: I0122 13:56:16.742032 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-nwrtw" Jan 22 13:56:16 crc kubenswrapper[4769]: I0122 13:56:16.863240 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc77w5v"] Jan 22 13:56:16 crc kubenswrapper[4769]: I0122 13:56:16.921156 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9fa4c168-21ea-4f79-a600-7f3c8f656bd0-console-config\") pod \"9fa4c168-21ea-4f79-a600-7f3c8f656bd0\" (UID: \"9fa4c168-21ea-4f79-a600-7f3c8f656bd0\") " Jan 22 13:56:16 crc kubenswrapper[4769]: I0122 13:56:16.921197 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9fa4c168-21ea-4f79-a600-7f3c8f656bd0-console-oauth-config\") pod \"9fa4c168-21ea-4f79-a600-7f3c8f656bd0\" (UID: \"9fa4c168-21ea-4f79-a600-7f3c8f656bd0\") " Jan 22 13:56:16 crc kubenswrapper[4769]: I0122 13:56:16.921242 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wt8zc\" (UniqueName: \"kubernetes.io/projected/9fa4c168-21ea-4f79-a600-7f3c8f656bd0-kube-api-access-wt8zc\") pod \"9fa4c168-21ea-4f79-a600-7f3c8f656bd0\" (UID: \"9fa4c168-21ea-4f79-a600-7f3c8f656bd0\") " Jan 22 13:56:16 crc kubenswrapper[4769]: I0122 13:56:16.921258 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9fa4c168-21ea-4f79-a600-7f3c8f656bd0-console-serving-cert\") pod \"9fa4c168-21ea-4f79-a600-7f3c8f656bd0\" (UID: \"9fa4c168-21ea-4f79-a600-7f3c8f656bd0\") " Jan 22 13:56:16 crc kubenswrapper[4769]: I0122 13:56:16.921479 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9fa4c168-21ea-4f79-a600-7f3c8f656bd0-trusted-ca-bundle\") pod \"9fa4c168-21ea-4f79-a600-7f3c8f656bd0\" (UID: \"9fa4c168-21ea-4f79-a600-7f3c8f656bd0\") " Jan 22 13:56:16 crc kubenswrapper[4769]: I0122 13:56:16.921552 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9fa4c168-21ea-4f79-a600-7f3c8f656bd0-service-ca\") pod \"9fa4c168-21ea-4f79-a600-7f3c8f656bd0\" (UID: \"9fa4c168-21ea-4f79-a600-7f3c8f656bd0\") " Jan 22 13:56:16 crc kubenswrapper[4769]: I0122 13:56:16.921635 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9fa4c168-21ea-4f79-a600-7f3c8f656bd0-oauth-serving-cert\") pod \"9fa4c168-21ea-4f79-a600-7f3c8f656bd0\" (UID: \"9fa4c168-21ea-4f79-a600-7f3c8f656bd0\") " Jan 22 13:56:16 crc kubenswrapper[4769]: I0122 13:56:16.922260 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fa4c168-21ea-4f79-a600-7f3c8f656bd0-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "9fa4c168-21ea-4f79-a600-7f3c8f656bd0" (UID: "9fa4c168-21ea-4f79-a600-7f3c8f656bd0"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:56:16 crc kubenswrapper[4769]: I0122 13:56:16.922397 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fa4c168-21ea-4f79-a600-7f3c8f656bd0-service-ca" (OuterVolumeSpecName: "service-ca") pod "9fa4c168-21ea-4f79-a600-7f3c8f656bd0" (UID: "9fa4c168-21ea-4f79-a600-7f3c8f656bd0"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:56:16 crc kubenswrapper[4769]: I0122 13:56:16.922562 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fa4c168-21ea-4f79-a600-7f3c8f656bd0-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "9fa4c168-21ea-4f79-a600-7f3c8f656bd0" (UID: "9fa4c168-21ea-4f79-a600-7f3c8f656bd0"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:56:16 crc kubenswrapper[4769]: I0122 13:56:16.922585 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fa4c168-21ea-4f79-a600-7f3c8f656bd0-console-config" (OuterVolumeSpecName: "console-config") pod "9fa4c168-21ea-4f79-a600-7f3c8f656bd0" (UID: "9fa4c168-21ea-4f79-a600-7f3c8f656bd0"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:56:16 crc kubenswrapper[4769]: I0122 13:56:16.926711 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fa4c168-21ea-4f79-a600-7f3c8f656bd0-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "9fa4c168-21ea-4f79-a600-7f3c8f656bd0" (UID: "9fa4c168-21ea-4f79-a600-7f3c8f656bd0"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:56:16 crc kubenswrapper[4769]: I0122 13:56:16.927148 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fa4c168-21ea-4f79-a600-7f3c8f656bd0-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "9fa4c168-21ea-4f79-a600-7f3c8f656bd0" (UID: "9fa4c168-21ea-4f79-a600-7f3c8f656bd0"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:56:16 crc kubenswrapper[4769]: I0122 13:56:16.927160 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fa4c168-21ea-4f79-a600-7f3c8f656bd0-kube-api-access-wt8zc" (OuterVolumeSpecName: "kube-api-access-wt8zc") pod "9fa4c168-21ea-4f79-a600-7f3c8f656bd0" (UID: "9fa4c168-21ea-4f79-a600-7f3c8f656bd0"). InnerVolumeSpecName "kube-api-access-wt8zc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:56:17 crc kubenswrapper[4769]: I0122 13:56:17.023588 4769 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9fa4c168-21ea-4f79-a600-7f3c8f656bd0-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 13:56:17 crc kubenswrapper[4769]: I0122 13:56:17.023653 4769 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9fa4c168-21ea-4f79-a600-7f3c8f656bd0-service-ca\") on node \"crc\" DevicePath \"\"" Jan 22 13:56:17 crc kubenswrapper[4769]: I0122 13:56:17.023663 4769 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9fa4c168-21ea-4f79-a600-7f3c8f656bd0-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 13:56:17 crc kubenswrapper[4769]: I0122 13:56:17.023672 4769 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9fa4c168-21ea-4f79-a600-7f3c8f656bd0-console-config\") on node \"crc\" DevicePath \"\"" Jan 22 13:56:17 crc kubenswrapper[4769]: I0122 13:56:17.023681 4769 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9fa4c168-21ea-4f79-a600-7f3c8f656bd0-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 22 13:56:17 crc kubenswrapper[4769]: I0122 13:56:17.023689 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wt8zc\" (UniqueName: \"kubernetes.io/projected/9fa4c168-21ea-4f79-a600-7f3c8f656bd0-kube-api-access-wt8zc\") on node \"crc\" DevicePath \"\"" Jan 22 13:56:17 crc kubenswrapper[4769]: I0122 13:56:17.023700 4769 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9fa4c168-21ea-4f79-a600-7f3c8f656bd0-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 13:56:17 crc kubenswrapper[4769]: I0122 13:56:17.387304 4769 generic.go:334] "Generic (PLEG): container finished" podID="2bd12d13-4630-4e58-95dd-7e6b2bb89428" containerID="890fbd70b9990cdab67db237f376067a636c58e36804cbc5514e8c0f16624b00" exitCode=0 Jan 22 13:56:17 crc kubenswrapper[4769]: I0122 13:56:17.387395 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc77w5v" event={"ID":"2bd12d13-4630-4e58-95dd-7e6b2bb89428","Type":"ContainerDied","Data":"890fbd70b9990cdab67db237f376067a636c58e36804cbc5514e8c0f16624b00"} Jan 22 13:56:17 crc kubenswrapper[4769]: I0122 13:56:17.387670 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc77w5v" event={"ID":"2bd12d13-4630-4e58-95dd-7e6b2bb89428","Type":"ContainerStarted","Data":"5bd0bdffd5fe41dd37b42854f8cba8b2ef713aff82ddb5f084f8b150d8aaec8f"} Jan 22 13:56:17 crc kubenswrapper[4769]: I0122 13:56:17.389284 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-nwrtw_9fa4c168-21ea-4f79-a600-7f3c8f656bd0/console/0.log" Jan 22 13:56:17 crc kubenswrapper[4769]: I0122 13:56:17.389356 4769 generic.go:334] "Generic (PLEG): container finished" podID="9fa4c168-21ea-4f79-a600-7f3c8f656bd0" containerID="b84cebc5b675e12661d4f7b983dcf05ea20ef3d051e2af2e9f65b08adbb73089" exitCode=2 Jan 22 13:56:17 crc kubenswrapper[4769]: I0122 13:56:17.389385 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-console/console-f9d7485db-nwrtw" event={"ID":"9fa4c168-21ea-4f79-a600-7f3c8f656bd0","Type":"ContainerDied","Data":"b84cebc5b675e12661d4f7b983dcf05ea20ef3d051e2af2e9f65b08adbb73089"} Jan 22 13:56:17 crc kubenswrapper[4769]: I0122 13:56:17.389410 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-nwrtw" event={"ID":"9fa4c168-21ea-4f79-a600-7f3c8f656bd0","Type":"ContainerDied","Data":"261bd1091a2577bc464771e7c33703e0f325865e92a22082bfb502ff9ac9d6f2"} Jan 22 13:56:17 crc kubenswrapper[4769]: I0122 13:56:17.389431 4769 scope.go:117] "RemoveContainer" containerID="b84cebc5b675e12661d4f7b983dcf05ea20ef3d051e2af2e9f65b08adbb73089" Jan 22 13:56:17 crc kubenswrapper[4769]: I0122 13:56:17.389506 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-nwrtw" Jan 22 13:56:17 crc kubenswrapper[4769]: I0122 13:56:17.409911 4769 scope.go:117] "RemoveContainer" containerID="b84cebc5b675e12661d4f7b983dcf05ea20ef3d051e2af2e9f65b08adbb73089" Jan 22 13:56:17 crc kubenswrapper[4769]: E0122 13:56:17.410318 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b84cebc5b675e12661d4f7b983dcf05ea20ef3d051e2af2e9f65b08adbb73089\": container with ID starting with b84cebc5b675e12661d4f7b983dcf05ea20ef3d051e2af2e9f65b08adbb73089 not found: ID does not exist" containerID="b84cebc5b675e12661d4f7b983dcf05ea20ef3d051e2af2e9f65b08adbb73089" Jan 22 13:56:17 crc kubenswrapper[4769]: I0122 13:56:17.410358 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b84cebc5b675e12661d4f7b983dcf05ea20ef3d051e2af2e9f65b08adbb73089"} err="failed to get container status \"b84cebc5b675e12661d4f7b983dcf05ea20ef3d051e2af2e9f65b08adbb73089\": rpc error: code = NotFound desc = could not find container \"b84cebc5b675e12661d4f7b983dcf05ea20ef3d051e2af2e9f65b08adbb73089\": container with ID starting with b84cebc5b675e12661d4f7b983dcf05ea20ef3d051e2af2e9f65b08adbb73089 not found: ID does not exist" Jan 22 13:56:17 crc kubenswrapper[4769]: I0122 13:56:17.422259 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-nwrtw"] Jan 22 13:56:17 crc kubenswrapper[4769]: I0122 13:56:17.427167 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-nwrtw"] Jan 22 13:56:18 crc kubenswrapper[4769]: I0122 13:56:18.699287 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bpmf9"] Jan 22 13:56:18 crc kubenswrapper[4769]: E0122 13:56:18.699945 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fa4c168-21ea-4f79-a600-7f3c8f656bd0" containerName="console" Jan 22 13:56:18 crc kubenswrapper[4769]: I0122 13:56:18.699960 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fa4c168-21ea-4f79-a600-7f3c8f656bd0" containerName="console" Jan 22 13:56:18 crc kubenswrapper[4769]: I0122 13:56:18.700082 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fa4c168-21ea-4f79-a600-7f3c8f656bd0" containerName="console" Jan 22 13:56:18 crc kubenswrapper[4769]: I0122 13:56:18.700786 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bpmf9" Jan 22 13:56:18 crc kubenswrapper[4769]: I0122 13:56:18.718247 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bpmf9"] Jan 22 13:56:18 crc kubenswrapper[4769]: I0122 13:56:18.845023 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35-utilities\") pod \"redhat-operators-bpmf9\" (UID: \"d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35\") " pod="openshift-marketplace/redhat-operators-bpmf9" Jan 22 13:56:18 crc kubenswrapper[4769]: I0122 13:56:18.845262 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zml5r\" (UniqueName: \"kubernetes.io/projected/d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35-kube-api-access-zml5r\") pod \"redhat-operators-bpmf9\" (UID: \"d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35\") " pod="openshift-marketplace/redhat-operators-bpmf9" Jan 22 13:56:18 crc kubenswrapper[4769]: I0122 13:56:18.845311 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35-catalog-content\") pod \"redhat-operators-bpmf9\" (UID: \"d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35\") " pod="openshift-marketplace/redhat-operators-bpmf9" Jan 22 13:56:18 crc kubenswrapper[4769]: I0122 13:56:18.891541 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fa4c168-21ea-4f79-a600-7f3c8f656bd0" path="/var/lib/kubelet/pods/9fa4c168-21ea-4f79-a600-7f3c8f656bd0/volumes" Jan 22 13:56:18 crc kubenswrapper[4769]: I0122 13:56:18.946374 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35-catalog-content\") pod \"redhat-operators-bpmf9\" (UID: \"d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35\") " pod="openshift-marketplace/redhat-operators-bpmf9" Jan 22 13:56:18 crc kubenswrapper[4769]: I0122 13:56:18.946474 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35-utilities\") pod \"redhat-operators-bpmf9\" (UID: \"d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35\") " pod="openshift-marketplace/redhat-operators-bpmf9" Jan 22 13:56:18 crc kubenswrapper[4769]: I0122 13:56:18.946619 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zml5r\" (UniqueName: \"kubernetes.io/projected/d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35-kube-api-access-zml5r\") pod \"redhat-operators-bpmf9\" (UID: \"d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35\") " pod="openshift-marketplace/redhat-operators-bpmf9" Jan 22 13:56:18 crc kubenswrapper[4769]: I0122 13:56:18.946924 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35-catalog-content\") pod \"redhat-operators-bpmf9\" (UID: \"d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35\") " pod="openshift-marketplace/redhat-operators-bpmf9" Jan 22 13:56:18 crc kubenswrapper[4769]: I0122 13:56:18.946968 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35-utilities\") pod \"redhat-operators-bpmf9\" 
(UID: \"d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35\") " pod="openshift-marketplace/redhat-operators-bpmf9" Jan 22 13:56:18 crc kubenswrapper[4769]: I0122 13:56:18.968389 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zml5r\" (UniqueName: \"kubernetes.io/projected/d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35-kube-api-access-zml5r\") pod \"redhat-operators-bpmf9\" (UID: \"d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35\") " pod="openshift-marketplace/redhat-operators-bpmf9" Jan 22 13:56:19 crc kubenswrapper[4769]: I0122 13:56:19.078767 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bpmf9" Jan 22 13:56:19 crc kubenswrapper[4769]: I0122 13:56:19.403087 4769 generic.go:334] "Generic (PLEG): container finished" podID="2bd12d13-4630-4e58-95dd-7e6b2bb89428" containerID="c152540a14357f8370a3662a02aafa5e7b26afe456e69ee3ad50ed2522eaf692" exitCode=0 Jan 22 13:56:19 crc kubenswrapper[4769]: I0122 13:56:19.403248 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc77w5v" event={"ID":"2bd12d13-4630-4e58-95dd-7e6b2bb89428","Type":"ContainerDied","Data":"c152540a14357f8370a3662a02aafa5e7b26afe456e69ee3ad50ed2522eaf692"} Jan 22 13:56:19 crc kubenswrapper[4769]: I0122 13:56:19.489082 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bpmf9"] Jan 22 13:56:20 crc kubenswrapper[4769]: I0122 13:56:20.411540 4769 generic.go:334] "Generic (PLEG): container finished" podID="d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35" containerID="730dcc795c905f62c1eed3b68862040bcbc4f79cce5d34ad5a4d9d2018a6070a" exitCode=0 Jan 22 13:56:20 crc kubenswrapper[4769]: I0122 13:56:20.411718 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bpmf9" event={"ID":"d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35","Type":"ContainerDied","Data":"730dcc795c905f62c1eed3b68862040bcbc4f79cce5d34ad5a4d9d2018a6070a"} Jan 22 13:56:20 crc kubenswrapper[4769]: I0122 13:56:20.411990 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bpmf9" event={"ID":"d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35","Type":"ContainerStarted","Data":"106665bdbcb8a203e18701468576d0c52caf4507eea3613063f75100024b19fe"} Jan 22 13:56:20 crc kubenswrapper[4769]: I0122 13:56:20.415755 4769 generic.go:334] "Generic (PLEG): container finished" podID="2bd12d13-4630-4e58-95dd-7e6b2bb89428" containerID="df17619ffdf330b353464dece8965283d3ec4b8b77a08731fe7f06a1c92f3802" exitCode=0 Jan 22 13:56:20 crc kubenswrapper[4769]: I0122 13:56:20.415831 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc77w5v" event={"ID":"2bd12d13-4630-4e58-95dd-7e6b2bb89428","Type":"ContainerDied","Data":"df17619ffdf330b353464dece8965283d3ec4b8b77a08731fe7f06a1c92f3802"} Jan 22 13:56:21 crc kubenswrapper[4769]: I0122 13:56:21.422592 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bpmf9" event={"ID":"d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35","Type":"ContainerStarted","Data":"395addc925f05ab2ea9342b7a8df05feea4229a5f9a22966d52b9cc3a729037f"} Jan 22 13:56:21 crc kubenswrapper[4769]: I0122 13:56:21.626192 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc77w5v" Jan 22 13:56:21 crc kubenswrapper[4769]: I0122 13:56:21.781454 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69wvb\" (UniqueName: \"kubernetes.io/projected/2bd12d13-4630-4e58-95dd-7e6b2bb89428-kube-api-access-69wvb\") pod \"2bd12d13-4630-4e58-95dd-7e6b2bb89428\" (UID: \"2bd12d13-4630-4e58-95dd-7e6b2bb89428\") " Jan 22 13:56:21 crc kubenswrapper[4769]: I0122 13:56:21.781591 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2bd12d13-4630-4e58-95dd-7e6b2bb89428-util\") pod \"2bd12d13-4630-4e58-95dd-7e6b2bb89428\" (UID: \"2bd12d13-4630-4e58-95dd-7e6b2bb89428\") " Jan 22 13:56:21 crc kubenswrapper[4769]: I0122 13:56:21.781646 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2bd12d13-4630-4e58-95dd-7e6b2bb89428-bundle\") pod \"2bd12d13-4630-4e58-95dd-7e6b2bb89428\" (UID: \"2bd12d13-4630-4e58-95dd-7e6b2bb89428\") " Jan 22 13:56:21 crc kubenswrapper[4769]: I0122 13:56:21.782653 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2bd12d13-4630-4e58-95dd-7e6b2bb89428-bundle" (OuterVolumeSpecName: "bundle") pod "2bd12d13-4630-4e58-95dd-7e6b2bb89428" (UID: "2bd12d13-4630-4e58-95dd-7e6b2bb89428"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 13:56:21 crc kubenswrapper[4769]: I0122 13:56:21.786673 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bd12d13-4630-4e58-95dd-7e6b2bb89428-kube-api-access-69wvb" (OuterVolumeSpecName: "kube-api-access-69wvb") pod "2bd12d13-4630-4e58-95dd-7e6b2bb89428" (UID: "2bd12d13-4630-4e58-95dd-7e6b2bb89428"). InnerVolumeSpecName "kube-api-access-69wvb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:56:21 crc kubenswrapper[4769]: I0122 13:56:21.797143 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2bd12d13-4630-4e58-95dd-7e6b2bb89428-util" (OuterVolumeSpecName: "util") pod "2bd12d13-4630-4e58-95dd-7e6b2bb89428" (UID: "2bd12d13-4630-4e58-95dd-7e6b2bb89428"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 13:56:21 crc kubenswrapper[4769]: I0122 13:56:21.883388 4769 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2bd12d13-4630-4e58-95dd-7e6b2bb89428-util\") on node \"crc\" DevicePath \"\"" Jan 22 13:56:21 crc kubenswrapper[4769]: I0122 13:56:21.883712 4769 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2bd12d13-4630-4e58-95dd-7e6b2bb89428-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 13:56:21 crc kubenswrapper[4769]: I0122 13:56:21.883857 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69wvb\" (UniqueName: \"kubernetes.io/projected/2bd12d13-4630-4e58-95dd-7e6b2bb89428-kube-api-access-69wvb\") on node \"crc\" DevicePath \"\"" Jan 22 13:56:22 crc kubenswrapper[4769]: I0122 13:56:22.429592 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc77w5v" Jan 22 13:56:22 crc kubenswrapper[4769]: I0122 13:56:22.429581 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc77w5v" event={"ID":"2bd12d13-4630-4e58-95dd-7e6b2bb89428","Type":"ContainerDied","Data":"5bd0bdffd5fe41dd37b42854f8cba8b2ef713aff82ddb5f084f8b150d8aaec8f"} Jan 22 13:56:22 crc kubenswrapper[4769]: I0122 13:56:22.430515 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5bd0bdffd5fe41dd37b42854f8cba8b2ef713aff82ddb5f084f8b150d8aaec8f" Jan 22 13:56:22 crc kubenswrapper[4769]: I0122 13:56:22.431135 4769 generic.go:334] "Generic (PLEG): container finished" podID="d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35" containerID="395addc925f05ab2ea9342b7a8df05feea4229a5f9a22966d52b9cc3a729037f" exitCode=0 Jan 22 13:56:22 crc kubenswrapper[4769]: I0122 13:56:22.431168 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bpmf9" event={"ID":"d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35","Type":"ContainerDied","Data":"395addc925f05ab2ea9342b7a8df05feea4229a5f9a22966d52b9cc3a729037f"} Jan 22 13:56:23 crc kubenswrapper[4769]: I0122 13:56:23.439225 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bpmf9" event={"ID":"d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35","Type":"ContainerStarted","Data":"32bcc0e004ca426455f2af36390c38c54188e7f45cbf190324c9729aec8c6375"} Jan 22 13:56:23 crc kubenswrapper[4769]: I0122 13:56:23.456098 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bpmf9" podStartSLOduration=2.764578658 podStartE2EDuration="5.456078784s" podCreationTimestamp="2026-01-22 13:56:18 +0000 UTC" firstStartedPulling="2026-01-22 13:56:20.41613883 +0000 UTC m=+759.827248759" lastFinishedPulling="2026-01-22 13:56:23.107638956 +0000 UTC m=+762.518748885" observedRunningTime="2026-01-22 13:56:23.455559931 +0000 UTC m=+762.866669870" watchObservedRunningTime="2026-01-22 13:56:23.456078784 +0000 UTC m=+762.867188713" Jan 22 13:56:29 crc kubenswrapper[4769]: I0122 13:56:29.080031 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bpmf9" Jan 22 13:56:29 crc kubenswrapper[4769]: I0122 13:56:29.081948 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bpmf9" Jan 22 13:56:29 crc kubenswrapper[4769]: I0122 13:56:29.139105 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bpmf9" Jan 22 13:56:29 crc kubenswrapper[4769]: I0122 13:56:29.508182 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bpmf9" Jan 22 13:56:30 crc kubenswrapper[4769]: I0122 13:56:30.487526 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bpmf9"] Jan 22 13:56:31 crc kubenswrapper[4769]: I0122 13:56:31.480166 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bpmf9" podUID="d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35" containerName="registry-server" containerID="cri-o://32bcc0e004ca426455f2af36390c38c54188e7f45cbf190324c9729aec8c6375" gracePeriod=2 Jan 22 13:56:32 crc kubenswrapper[4769]: I0122 
13:56:32.487661 4769 generic.go:334] "Generic (PLEG): container finished" podID="d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35" containerID="32bcc0e004ca426455f2af36390c38c54188e7f45cbf190324c9729aec8c6375" exitCode=0 Jan 22 13:56:32 crc kubenswrapper[4769]: I0122 13:56:32.487711 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bpmf9" event={"ID":"d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35","Type":"ContainerDied","Data":"32bcc0e004ca426455f2af36390c38c54188e7f45cbf190324c9729aec8c6375"} Jan 22 13:56:32 crc kubenswrapper[4769]: I0122 13:56:32.856707 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bpmf9" Jan 22 13:56:32 crc kubenswrapper[4769]: I0122 13:56:32.871008 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-ddb77dbc9-z2nv4"] Jan 22 13:56:32 crc kubenswrapper[4769]: E0122 13:56:32.871270 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35" containerName="registry-server" Jan 22 13:56:32 crc kubenswrapper[4769]: I0122 13:56:32.871293 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35" containerName="registry-server" Jan 22 13:56:32 crc kubenswrapper[4769]: E0122 13:56:32.871311 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35" containerName="extract-utilities" Jan 22 13:56:32 crc kubenswrapper[4769]: I0122 13:56:32.871320 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35" containerName="extract-utilities" Jan 22 13:56:32 crc kubenswrapper[4769]: E0122 13:56:32.871332 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bd12d13-4630-4e58-95dd-7e6b2bb89428" containerName="pull" Jan 22 13:56:32 crc kubenswrapper[4769]: I0122 13:56:32.871340 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bd12d13-4630-4e58-95dd-7e6b2bb89428" containerName="pull" Jan 22 13:56:32 crc kubenswrapper[4769]: E0122 13:56:32.871357 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bd12d13-4630-4e58-95dd-7e6b2bb89428" containerName="extract" Jan 22 13:56:32 crc kubenswrapper[4769]: I0122 13:56:32.871365 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bd12d13-4630-4e58-95dd-7e6b2bb89428" containerName="extract" Jan 22 13:56:32 crc kubenswrapper[4769]: E0122 13:56:32.871377 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bd12d13-4630-4e58-95dd-7e6b2bb89428" containerName="util" Jan 22 13:56:32 crc kubenswrapper[4769]: I0122 13:56:32.871384 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bd12d13-4630-4e58-95dd-7e6b2bb89428" containerName="util" Jan 22 13:56:32 crc kubenswrapper[4769]: E0122 13:56:32.871393 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35" containerName="extract-content" Jan 22 13:56:32 crc kubenswrapper[4769]: I0122 13:56:32.871401 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35" containerName="extract-content" Jan 22 13:56:32 crc kubenswrapper[4769]: I0122 13:56:32.871522 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35" containerName="registry-server" Jan 22 13:56:32 crc kubenswrapper[4769]: I0122 13:56:32.871536 4769 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="2bd12d13-4630-4e58-95dd-7e6b2bb89428" containerName="extract" Jan 22 13:56:32 crc kubenswrapper[4769]: I0122 13:56:32.872025 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-ddb77dbc9-z2nv4" Jan 22 13:56:32 crc kubenswrapper[4769]: I0122 13:56:32.875586 4769 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 22 13:56:32 crc kubenswrapper[4769]: I0122 13:56:32.879065 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 22 13:56:32 crc kubenswrapper[4769]: I0122 13:56:32.879192 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 22 13:56:32 crc kubenswrapper[4769]: I0122 13:56:32.879270 4769 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 22 13:56:32 crc kubenswrapper[4769]: I0122 13:56:32.879331 4769 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-6rdbl" Jan 22 13:56:32 crc kubenswrapper[4769]: I0122 13:56:32.896745 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-ddb77dbc9-z2nv4"] Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.025764 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35-catalog-content\") pod \"d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35\" (UID: \"d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35\") " Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.025909 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35-utilities\") pod \"d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35\" (UID: \"d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35\") " Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.025937 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zml5r\" (UniqueName: \"kubernetes.io/projected/d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35-kube-api-access-zml5r\") pod \"d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35\" (UID: \"d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35\") " Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.026090 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0e40742e-231f-4f7b-aa4b-fb58332c3dbe-webhook-cert\") pod \"metallb-operator-controller-manager-ddb77dbc9-z2nv4\" (UID: \"0e40742e-231f-4f7b-aa4b-fb58332c3dbe\") " pod="metallb-system/metallb-operator-controller-manager-ddb77dbc9-z2nv4" Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.026122 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0e40742e-231f-4f7b-aa4b-fb58332c3dbe-apiservice-cert\") pod \"metallb-operator-controller-manager-ddb77dbc9-z2nv4\" (UID: \"0e40742e-231f-4f7b-aa4b-fb58332c3dbe\") " pod="metallb-system/metallb-operator-controller-manager-ddb77dbc9-z2nv4" Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.026147 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-btdrz\" (UniqueName: \"kubernetes.io/projected/0e40742e-231f-4f7b-aa4b-fb58332c3dbe-kube-api-access-btdrz\") pod \"metallb-operator-controller-manager-ddb77dbc9-z2nv4\" (UID: \"0e40742e-231f-4f7b-aa4b-fb58332c3dbe\") " pod="metallb-system/metallb-operator-controller-manager-ddb77dbc9-z2nv4" Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.026948 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35-utilities" (OuterVolumeSpecName: "utilities") pod "d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35" (UID: "d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.034220 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35-kube-api-access-zml5r" (OuterVolumeSpecName: "kube-api-access-zml5r") pod "d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35" (UID: "d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35"). InnerVolumeSpecName "kube-api-access-zml5r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.126850 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0e40742e-231f-4f7b-aa4b-fb58332c3dbe-webhook-cert\") pod \"metallb-operator-controller-manager-ddb77dbc9-z2nv4\" (UID: \"0e40742e-231f-4f7b-aa4b-fb58332c3dbe\") " pod="metallb-system/metallb-operator-controller-manager-ddb77dbc9-z2nv4" Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.126921 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0e40742e-231f-4f7b-aa4b-fb58332c3dbe-apiservice-cert\") pod \"metallb-operator-controller-manager-ddb77dbc9-z2nv4\" (UID: \"0e40742e-231f-4f7b-aa4b-fb58332c3dbe\") " pod="metallb-system/metallb-operator-controller-manager-ddb77dbc9-z2nv4" Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.126959 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btdrz\" (UniqueName: \"kubernetes.io/projected/0e40742e-231f-4f7b-aa4b-fb58332c3dbe-kube-api-access-btdrz\") pod \"metallb-operator-controller-manager-ddb77dbc9-z2nv4\" (UID: \"0e40742e-231f-4f7b-aa4b-fb58332c3dbe\") " pod="metallb-system/metallb-operator-controller-manager-ddb77dbc9-z2nv4" Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.127054 4769 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.127072 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zml5r\" (UniqueName: \"kubernetes.io/projected/d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35-kube-api-access-zml5r\") on node \"crc\" DevicePath \"\"" Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.132826 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0e40742e-231f-4f7b-aa4b-fb58332c3dbe-apiservice-cert\") pod \"metallb-operator-controller-manager-ddb77dbc9-z2nv4\" (UID: \"0e40742e-231f-4f7b-aa4b-fb58332c3dbe\") " pod="metallb-system/metallb-operator-controller-manager-ddb77dbc9-z2nv4" Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 
13:56:33.133333 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0e40742e-231f-4f7b-aa4b-fb58332c3dbe-webhook-cert\") pod \"metallb-operator-controller-manager-ddb77dbc9-z2nv4\" (UID: \"0e40742e-231f-4f7b-aa4b-fb58332c3dbe\") " pod="metallb-system/metallb-operator-controller-manager-ddb77dbc9-z2nv4" Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.141352 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-7b46c7846-xbsl9"] Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.142110 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7b46c7846-xbsl9" Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.144728 4769 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-h6ftx" Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.144934 4769 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.145072 4769 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.145577 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7b46c7846-xbsl9"] Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.148381 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btdrz\" (UniqueName: \"kubernetes.io/projected/0e40742e-231f-4f7b-aa4b-fb58332c3dbe-kube-api-access-btdrz\") pod \"metallb-operator-controller-manager-ddb77dbc9-z2nv4\" (UID: \"0e40742e-231f-4f7b-aa4b-fb58332c3dbe\") " pod="metallb-system/metallb-operator-controller-manager-ddb77dbc9-z2nv4" Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.196106 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-ddb77dbc9-z2nv4" Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.227994 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjf4b\" (UniqueName: \"kubernetes.io/projected/5ee84f81-0260-4579-b602-c37bcf5cc7aa-kube-api-access-tjf4b\") pod \"metallb-operator-webhook-server-7b46c7846-xbsl9\" (UID: \"5ee84f81-0260-4579-b602-c37bcf5cc7aa\") " pod="metallb-system/metallb-operator-webhook-server-7b46c7846-xbsl9" Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.228062 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5ee84f81-0260-4579-b602-c37bcf5cc7aa-apiservice-cert\") pod \"metallb-operator-webhook-server-7b46c7846-xbsl9\" (UID: \"5ee84f81-0260-4579-b602-c37bcf5cc7aa\") " pod="metallb-system/metallb-operator-webhook-server-7b46c7846-xbsl9" Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.228089 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5ee84f81-0260-4579-b602-c37bcf5cc7aa-webhook-cert\") pod \"metallb-operator-webhook-server-7b46c7846-xbsl9\" (UID: \"5ee84f81-0260-4579-b602-c37bcf5cc7aa\") " pod="metallb-system/metallb-operator-webhook-server-7b46c7846-xbsl9" Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.329550 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjf4b\" (UniqueName: \"kubernetes.io/projected/5ee84f81-0260-4579-b602-c37bcf5cc7aa-kube-api-access-tjf4b\") pod \"metallb-operator-webhook-server-7b46c7846-xbsl9\" (UID: \"5ee84f81-0260-4579-b602-c37bcf5cc7aa\") " pod="metallb-system/metallb-operator-webhook-server-7b46c7846-xbsl9" Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.329889 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5ee84f81-0260-4579-b602-c37bcf5cc7aa-apiservice-cert\") pod \"metallb-operator-webhook-server-7b46c7846-xbsl9\" (UID: \"5ee84f81-0260-4579-b602-c37bcf5cc7aa\") " pod="metallb-system/metallb-operator-webhook-server-7b46c7846-xbsl9" Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.329917 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5ee84f81-0260-4579-b602-c37bcf5cc7aa-webhook-cert\") pod \"metallb-operator-webhook-server-7b46c7846-xbsl9\" (UID: \"5ee84f81-0260-4579-b602-c37bcf5cc7aa\") " pod="metallb-system/metallb-operator-webhook-server-7b46c7846-xbsl9" Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.334045 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5ee84f81-0260-4579-b602-c37bcf5cc7aa-webhook-cert\") pod \"metallb-operator-webhook-server-7b46c7846-xbsl9\" (UID: \"5ee84f81-0260-4579-b602-c37bcf5cc7aa\") " pod="metallb-system/metallb-operator-webhook-server-7b46c7846-xbsl9" Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.353940 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5ee84f81-0260-4579-b602-c37bcf5cc7aa-apiservice-cert\") pod \"metallb-operator-webhook-server-7b46c7846-xbsl9\" (UID: \"5ee84f81-0260-4579-b602-c37bcf5cc7aa\") " 
pod="metallb-system/metallb-operator-webhook-server-7b46c7846-xbsl9" Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.354005 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjf4b\" (UniqueName: \"kubernetes.io/projected/5ee84f81-0260-4579-b602-c37bcf5cc7aa-kube-api-access-tjf4b\") pod \"metallb-operator-webhook-server-7b46c7846-xbsl9\" (UID: \"5ee84f81-0260-4579-b602-c37bcf5cc7aa\") " pod="metallb-system/metallb-operator-webhook-server-7b46c7846-xbsl9" Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.374319 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35" (UID: "d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.435875 4769 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.442619 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-ddb77dbc9-z2nv4"] Jan 22 13:56:33 crc kubenswrapper[4769]: W0122 13:56:33.447063 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e40742e_231f_4f7b_aa4b_fb58332c3dbe.slice/crio-3743dffb26594210bf4fd3edd69e1f1a060bfaa64696844553e6fd025b9ca9f6 WatchSource:0}: Error finding container 3743dffb26594210bf4fd3edd69e1f1a060bfaa64696844553e6fd025b9ca9f6: Status 404 returned error can't find the container with id 3743dffb26594210bf4fd3edd69e1f1a060bfaa64696844553e6fd025b9ca9f6 Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.482561 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7b46c7846-xbsl9" Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.495176 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bpmf9" event={"ID":"d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35","Type":"ContainerDied","Data":"106665bdbcb8a203e18701468576d0c52caf4507eea3613063f75100024b19fe"} Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.495211 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bpmf9" Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.495232 4769 scope.go:117] "RemoveContainer" containerID="32bcc0e004ca426455f2af36390c38c54188e7f45cbf190324c9729aec8c6375" Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.497452 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-ddb77dbc9-z2nv4" event={"ID":"0e40742e-231f-4f7b-aa4b-fb58332c3dbe","Type":"ContainerStarted","Data":"3743dffb26594210bf4fd3edd69e1f1a060bfaa64696844553e6fd025b9ca9f6"} Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.516043 4769 scope.go:117] "RemoveContainer" containerID="395addc925f05ab2ea9342b7a8df05feea4229a5f9a22966d52b9cc3a729037f" Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.524567 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bpmf9"] Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.536932 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bpmf9"] Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.549105 4769 scope.go:117] "RemoveContainer" containerID="730dcc795c905f62c1eed3b68862040bcbc4f79cce5d34ad5a4d9d2018a6070a" Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.702055 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7b46c7846-xbsl9"] Jan 22 13:56:33 crc kubenswrapper[4769]: W0122 13:56:33.709991 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5ee84f81_0260_4579_b602_c37bcf5cc7aa.slice/crio-3be07dbb093b3632190d0a57b6e62536c782011567550c10d0aaf9eb2457c586 WatchSource:0}: Error finding container 3be07dbb093b3632190d0a57b6e62536c782011567550c10d0aaf9eb2457c586: Status 404 returned error can't find the container with id 3be07dbb093b3632190d0a57b6e62536c782011567550c10d0aaf9eb2457c586 Jan 22 13:56:34 crc kubenswrapper[4769]: I0122 13:56:34.505814 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7b46c7846-xbsl9" event={"ID":"5ee84f81-0260-4579-b602-c37bcf5cc7aa","Type":"ContainerStarted","Data":"3be07dbb093b3632190d0a57b6e62536c782011567550c10d0aaf9eb2457c586"} Jan 22 13:56:34 crc kubenswrapper[4769]: I0122 13:56:34.891708 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35" path="/var/lib/kubelet/pods/d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35/volumes" Jan 22 13:56:40 crc kubenswrapper[4769]: I0122 13:56:40.481967 4769 patch_prober.go:28] interesting pod/machine-config-daemon-hwhw7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 13:56:40 crc kubenswrapper[4769]: I0122 13:56:40.482530 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 13:56:40 crc kubenswrapper[4769]: I0122 13:56:40.482572 4769 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" Jan 22 13:56:40 crc kubenswrapper[4769]: I0122 13:56:40.483094 4769 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3179ab0de90548977badcb720a49e9de55c423265ce63debd6542edff4ab9f17"} pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 13:56:40 crc kubenswrapper[4769]: I0122 13:56:40.483149 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419" containerName="machine-config-daemon" containerID="cri-o://3179ab0de90548977badcb720a49e9de55c423265ce63debd6542edff4ab9f17" gracePeriod=600 Jan 22 13:56:40 crc kubenswrapper[4769]: I0122 13:56:40.548602 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7b46c7846-xbsl9" event={"ID":"5ee84f81-0260-4579-b602-c37bcf5cc7aa","Type":"ContainerStarted","Data":"b142d9bc95b974a43acae0c663421bd459fe25de709c8e19a53858942214acd1"} Jan 22 13:56:40 crc kubenswrapper[4769]: I0122 13:56:40.548984 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-7b46c7846-xbsl9" Jan 22 13:56:40 crc kubenswrapper[4769]: I0122 13:56:40.551066 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-ddb77dbc9-z2nv4" event={"ID":"0e40742e-231f-4f7b-aa4b-fb58332c3dbe","Type":"ContainerStarted","Data":"d7305f1836274804aee27874d01e68e216e546ee58c63359bf5ad545fb93fa4b"} Jan 22 13:56:40 crc kubenswrapper[4769]: I0122 13:56:40.551204 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-ddb77dbc9-z2nv4" Jan 22 13:56:40 crc kubenswrapper[4769]: I0122 13:56:40.571332 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-7b46c7846-xbsl9" podStartSLOduration=1.342811561 podStartE2EDuration="7.571315284s" podCreationTimestamp="2026-01-22 13:56:33 +0000 UTC" firstStartedPulling="2026-01-22 13:56:33.712500108 +0000 UTC m=+773.123610037" lastFinishedPulling="2026-01-22 13:56:39.941003831 +0000 UTC m=+779.352113760" observedRunningTime="2026-01-22 13:56:40.567244363 +0000 UTC m=+779.978354292" watchObservedRunningTime="2026-01-22 13:56:40.571315284 +0000 UTC m=+779.982425213" Jan 22 13:56:40 crc kubenswrapper[4769]: I0122 13:56:40.590551 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-ddb77dbc9-z2nv4" podStartSLOduration=2.123635992 podStartE2EDuration="8.590531724s" podCreationTimestamp="2026-01-22 13:56:32 +0000 UTC" firstStartedPulling="2026-01-22 13:56:33.449878474 +0000 UTC m=+772.860988403" lastFinishedPulling="2026-01-22 13:56:39.916774206 +0000 UTC m=+779.327884135" observedRunningTime="2026-01-22 13:56:40.587987411 +0000 UTC m=+779.999097340" watchObservedRunningTime="2026-01-22 13:56:40.590531724 +0000 UTC m=+780.001641653" Jan 22 13:56:41 crc kubenswrapper[4769]: I0122 13:56:41.565469 4769 generic.go:334] "Generic (PLEG): container finished" podID="f0af8746-c9f0-48e6-8a60-02fed286b419" containerID="3179ab0de90548977badcb720a49e9de55c423265ce63debd6542edff4ab9f17" exitCode=0 Jan 22 13:56:41 crc 
kubenswrapper[4769]: I0122 13:56:41.565936 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" event={"ID":"f0af8746-c9f0-48e6-8a60-02fed286b419","Type":"ContainerDied","Data":"3179ab0de90548977badcb720a49e9de55c423265ce63debd6542edff4ab9f17"} Jan 22 13:56:41 crc kubenswrapper[4769]: I0122 13:56:41.565986 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" event={"ID":"f0af8746-c9f0-48e6-8a60-02fed286b419","Type":"ContainerStarted","Data":"ee8cd9f7d29583d39d5d09ca76eab4931e04c9d5e08aa5de68839001387a3d8e"} Jan 22 13:56:41 crc kubenswrapper[4769]: I0122 13:56:41.566004 4769 scope.go:117] "RemoveContainer" containerID="7014a00da4fb8832772c2abca967236faf9013893d9fcbf3a4a715925f75ad7d" Jan 22 13:56:53 crc kubenswrapper[4769]: I0122 13:56:53.486427 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-7b46c7846-xbsl9" Jan 22 13:57:13 crc kubenswrapper[4769]: I0122 13:57:13.198814 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-ddb77dbc9-z2nv4" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.051561 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-5vm9t"] Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.053650 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-5vm9t" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.056325 4769 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.056382 4769 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-krt5h" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.056442 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.060661 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-9n85j"] Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.061339 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-9n85j" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.064256 4769 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.068831 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/877a13a0-eef8-4409-b421-e3a8c23abc8a-frr-sockets\") pod \"frr-k8s-5vm9t\" (UID: \"877a13a0-eef8-4409-b421-e3a8c23abc8a\") " pod="metallb-system/frr-k8s-5vm9t" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.068884 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/877a13a0-eef8-4409-b421-e3a8c23abc8a-frr-startup\") pod \"frr-k8s-5vm9t\" (UID: \"877a13a0-eef8-4409-b421-e3a8c23abc8a\") " pod="metallb-system/frr-k8s-5vm9t" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.068908 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/877a13a0-eef8-4409-b421-e3a8c23abc8a-metrics-certs\") pod \"frr-k8s-5vm9t\" (UID: \"877a13a0-eef8-4409-b421-e3a8c23abc8a\") " pod="metallb-system/frr-k8s-5vm9t" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.068937 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbs6n\" (UniqueName: \"kubernetes.io/projected/877a13a0-eef8-4409-b421-e3a8c23abc8a-kube-api-access-kbs6n\") pod \"frr-k8s-5vm9t\" (UID: \"877a13a0-eef8-4409-b421-e3a8c23abc8a\") " pod="metallb-system/frr-k8s-5vm9t" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.068975 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgp9b\" (UniqueName: \"kubernetes.io/projected/82c00d20-0e87-4f34-9cae-d454867c62a0-kube-api-access-wgp9b\") pod \"frr-k8s-webhook-server-7df86c4f6c-9n85j\" (UID: \"82c00d20-0e87-4f34-9cae-d454867c62a0\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-9n85j" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.068996 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/877a13a0-eef8-4409-b421-e3a8c23abc8a-metrics\") pod \"frr-k8s-5vm9t\" (UID: \"877a13a0-eef8-4409-b421-e3a8c23abc8a\") " pod="metallb-system/frr-k8s-5vm9t" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.069178 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/877a13a0-eef8-4409-b421-e3a8c23abc8a-reloader\") pod \"frr-k8s-5vm9t\" (UID: \"877a13a0-eef8-4409-b421-e3a8c23abc8a\") " pod="metallb-system/frr-k8s-5vm9t" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.069247 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/877a13a0-eef8-4409-b421-e3a8c23abc8a-frr-conf\") pod \"frr-k8s-5vm9t\" (UID: \"877a13a0-eef8-4409-b421-e3a8c23abc8a\") " pod="metallb-system/frr-k8s-5vm9t" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.069326 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/82c00d20-0e87-4f34-9cae-d454867c62a0-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-9n85j\" (UID: \"82c00d20-0e87-4f34-9cae-d454867c62a0\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-9n85j" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.076337 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-9n85j"] Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.123292 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-lwzgw"] Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.124389 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-lwzgw" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.127815 4769 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-7ccsc" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.128065 4769 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.128209 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.128312 4769 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.139178 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-qkpds"] Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.139988 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-qkpds" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.142706 4769 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.170853 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4762d945-0720-43a9-8af2-0317ce89dda2-metrics-certs\") pod \"speaker-lwzgw\" (UID: \"4762d945-0720-43a9-8af2-0317ce89dda2\") " pod="metallb-system/speaker-lwzgw" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.170928 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/877a13a0-eef8-4409-b421-e3a8c23abc8a-reloader\") pod \"frr-k8s-5vm9t\" (UID: \"877a13a0-eef8-4409-b421-e3a8c23abc8a\") " pod="metallb-system/frr-k8s-5vm9t" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.170965 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8fbbec23-1005-4364-bf82-8a646a24801a-cert\") pod \"controller-6968d8fdc4-qkpds\" (UID: \"8fbbec23-1005-4364-bf82-8a646a24801a\") " pod="metallb-system/controller-6968d8fdc4-qkpds" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.170998 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/877a13a0-eef8-4409-b421-e3a8c23abc8a-frr-conf\") pod \"frr-k8s-5vm9t\" (UID: \"877a13a0-eef8-4409-b421-e3a8c23abc8a\") " pod="metallb-system/frr-k8s-5vm9t" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.171048 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/4762d945-0720-43a9-8af2-0317ce89dda2-metallb-excludel2\") pod \"speaker-lwzgw\" (UID: \"4762d945-0720-43a9-8af2-0317ce89dda2\") " pod="metallb-system/speaker-lwzgw" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.171080 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/82c00d20-0e87-4f34-9cae-d454867c62a0-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-9n85j\" (UID: \"82c00d20-0e87-4f34-9cae-d454867c62a0\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-9n85j" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.171114 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8fbbec23-1005-4364-bf82-8a646a24801a-metrics-certs\") pod \"controller-6968d8fdc4-qkpds\" (UID: \"8fbbec23-1005-4364-bf82-8a646a24801a\") " pod="metallb-system/controller-6968d8fdc4-qkpds" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.171148 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lstr4\" (UniqueName: \"kubernetes.io/projected/4762d945-0720-43a9-8af2-0317ce89dda2-kube-api-access-lstr4\") pod \"speaker-lwzgw\" (UID: \"4762d945-0720-43a9-8af2-0317ce89dda2\") " pod="metallb-system/speaker-lwzgw" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.171184 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/877a13a0-eef8-4409-b421-e3a8c23abc8a-frr-sockets\") pod \"frr-k8s-5vm9t\" (UID: \"877a13a0-eef8-4409-b421-e3a8c23abc8a\") " pod="metallb-system/frr-k8s-5vm9t" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.171224 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42cgt\" (UniqueName: \"kubernetes.io/projected/8fbbec23-1005-4364-bf82-8a646a24801a-kube-api-access-42cgt\") pod \"controller-6968d8fdc4-qkpds\" (UID: \"8fbbec23-1005-4364-bf82-8a646a24801a\") " pod="metallb-system/controller-6968d8fdc4-qkpds" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.171257 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/877a13a0-eef8-4409-b421-e3a8c23abc8a-frr-startup\") pod \"frr-k8s-5vm9t\" (UID: \"877a13a0-eef8-4409-b421-e3a8c23abc8a\") " pod="metallb-system/frr-k8s-5vm9t" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.171285 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/877a13a0-eef8-4409-b421-e3a8c23abc8a-metrics-certs\") pod \"frr-k8s-5vm9t\" (UID: \"877a13a0-eef8-4409-b421-e3a8c23abc8a\") " pod="metallb-system/frr-k8s-5vm9t" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.171322 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbs6n\" (UniqueName: \"kubernetes.io/projected/877a13a0-eef8-4409-b421-e3a8c23abc8a-kube-api-access-kbs6n\") pod \"frr-k8s-5vm9t\" (UID: \"877a13a0-eef8-4409-b421-e3a8c23abc8a\") " pod="metallb-system/frr-k8s-5vm9t" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.171350 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: 
\"kubernetes.io/secret/4762d945-0720-43a9-8af2-0317ce89dda2-memberlist\") pod \"speaker-lwzgw\" (UID: \"4762d945-0720-43a9-8af2-0317ce89dda2\") " pod="metallb-system/speaker-lwzgw" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.171394 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgp9b\" (UniqueName: \"kubernetes.io/projected/82c00d20-0e87-4f34-9cae-d454867c62a0-kube-api-access-wgp9b\") pod \"frr-k8s-webhook-server-7df86c4f6c-9n85j\" (UID: \"82c00d20-0e87-4f34-9cae-d454867c62a0\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-9n85j" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.171417 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/877a13a0-eef8-4409-b421-e3a8c23abc8a-metrics\") pod \"frr-k8s-5vm9t\" (UID: \"877a13a0-eef8-4409-b421-e3a8c23abc8a\") " pod="metallb-system/frr-k8s-5vm9t" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.171905 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/877a13a0-eef8-4409-b421-e3a8c23abc8a-metrics\") pod \"frr-k8s-5vm9t\" (UID: \"877a13a0-eef8-4409-b421-e3a8c23abc8a\") " pod="metallb-system/frr-k8s-5vm9t" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.172200 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/877a13a0-eef8-4409-b421-e3a8c23abc8a-frr-sockets\") pod \"frr-k8s-5vm9t\" (UID: \"877a13a0-eef8-4409-b421-e3a8c23abc8a\") " pod="metallb-system/frr-k8s-5vm9t" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.172308 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/877a13a0-eef8-4409-b421-e3a8c23abc8a-reloader\") pod \"frr-k8s-5vm9t\" (UID: \"877a13a0-eef8-4409-b421-e3a8c23abc8a\") " pod="metallb-system/frr-k8s-5vm9t" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.172958 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/877a13a0-eef8-4409-b421-e3a8c23abc8a-frr-conf\") pod \"frr-k8s-5vm9t\" (UID: \"877a13a0-eef8-4409-b421-e3a8c23abc8a\") " pod="metallb-system/frr-k8s-5vm9t" Jan 22 13:57:14 crc kubenswrapper[4769]: E0122 13:57:14.173088 4769 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Jan 22 13:57:14 crc kubenswrapper[4769]: E0122 13:57:14.173154 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/82c00d20-0e87-4f34-9cae-d454867c62a0-cert podName:82c00d20-0e87-4f34-9cae-d454867c62a0 nodeName:}" failed. No retries permitted until 2026-01-22 13:57:14.673135229 +0000 UTC m=+814.084245158 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/82c00d20-0e87-4f34-9cae-d454867c62a0-cert") pod "frr-k8s-webhook-server-7df86c4f6c-9n85j" (UID: "82c00d20-0e87-4f34-9cae-d454867c62a0") : secret "frr-k8s-webhook-server-cert" not found Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.173521 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/877a13a0-eef8-4409-b421-e3a8c23abc8a-frr-startup\") pod \"frr-k8s-5vm9t\" (UID: \"877a13a0-eef8-4409-b421-e3a8c23abc8a\") " pod="metallb-system/frr-k8s-5vm9t" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.187578 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/877a13a0-eef8-4409-b421-e3a8c23abc8a-metrics-certs\") pod \"frr-k8s-5vm9t\" (UID: \"877a13a0-eef8-4409-b421-e3a8c23abc8a\") " pod="metallb-system/frr-k8s-5vm9t" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.195922 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-qkpds"] Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.198413 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbs6n\" (UniqueName: \"kubernetes.io/projected/877a13a0-eef8-4409-b421-e3a8c23abc8a-kube-api-access-kbs6n\") pod \"frr-k8s-5vm9t\" (UID: \"877a13a0-eef8-4409-b421-e3a8c23abc8a\") " pod="metallb-system/frr-k8s-5vm9t" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.205866 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgp9b\" (UniqueName: \"kubernetes.io/projected/82c00d20-0e87-4f34-9cae-d454867c62a0-kube-api-access-wgp9b\") pod \"frr-k8s-webhook-server-7df86c4f6c-9n85j\" (UID: \"82c00d20-0e87-4f34-9cae-d454867c62a0\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-9n85j" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.272207 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/4762d945-0720-43a9-8af2-0317ce89dda2-metallb-excludel2\") pod \"speaker-lwzgw\" (UID: \"4762d945-0720-43a9-8af2-0317ce89dda2\") " pod="metallb-system/speaker-lwzgw" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.272295 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8fbbec23-1005-4364-bf82-8a646a24801a-metrics-certs\") pod \"controller-6968d8fdc4-qkpds\" (UID: \"8fbbec23-1005-4364-bf82-8a646a24801a\") " pod="metallb-system/controller-6968d8fdc4-qkpds" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.272326 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lstr4\" (UniqueName: \"kubernetes.io/projected/4762d945-0720-43a9-8af2-0317ce89dda2-kube-api-access-lstr4\") pod \"speaker-lwzgw\" (UID: \"4762d945-0720-43a9-8af2-0317ce89dda2\") " pod="metallb-system/speaker-lwzgw" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.272360 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42cgt\" (UniqueName: \"kubernetes.io/projected/8fbbec23-1005-4364-bf82-8a646a24801a-kube-api-access-42cgt\") pod \"controller-6968d8fdc4-qkpds\" (UID: \"8fbbec23-1005-4364-bf82-8a646a24801a\") " pod="metallb-system/controller-6968d8fdc4-qkpds" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.272395 4769 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4762d945-0720-43a9-8af2-0317ce89dda2-memberlist\") pod \"speaker-lwzgw\" (UID: \"4762d945-0720-43a9-8af2-0317ce89dda2\") " pod="metallb-system/speaker-lwzgw" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.272447 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4762d945-0720-43a9-8af2-0317ce89dda2-metrics-certs\") pod \"speaker-lwzgw\" (UID: \"4762d945-0720-43a9-8af2-0317ce89dda2\") " pod="metallb-system/speaker-lwzgw" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.272481 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8fbbec23-1005-4364-bf82-8a646a24801a-cert\") pod \"controller-6968d8fdc4-qkpds\" (UID: \"8fbbec23-1005-4364-bf82-8a646a24801a\") " pod="metallb-system/controller-6968d8fdc4-qkpds" Jan 22 13:57:14 crc kubenswrapper[4769]: E0122 13:57:14.272669 4769 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 22 13:57:14 crc kubenswrapper[4769]: E0122 13:57:14.272707 4769 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Jan 22 13:57:14 crc kubenswrapper[4769]: E0122 13:57:14.272726 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4762d945-0720-43a9-8af2-0317ce89dda2-memberlist podName:4762d945-0720-43a9-8af2-0317ce89dda2 nodeName:}" failed. No retries permitted until 2026-01-22 13:57:14.772703336 +0000 UTC m=+814.183813265 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/4762d945-0720-43a9-8af2-0317ce89dda2-memberlist") pod "speaker-lwzgw" (UID: "4762d945-0720-43a9-8af2-0317ce89dda2") : secret "metallb-memberlist" not found Jan 22 13:57:14 crc kubenswrapper[4769]: E0122 13:57:14.272784 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4762d945-0720-43a9-8af2-0317ce89dda2-metrics-certs podName:4762d945-0720-43a9-8af2-0317ce89dda2 nodeName:}" failed. No retries permitted until 2026-01-22 13:57:14.772767867 +0000 UTC m=+814.183877796 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4762d945-0720-43a9-8af2-0317ce89dda2-metrics-certs") pod "speaker-lwzgw" (UID: "4762d945-0720-43a9-8af2-0317ce89dda2") : secret "speaker-certs-secret" not found Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.273149 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/4762d945-0720-43a9-8af2-0317ce89dda2-metallb-excludel2\") pod \"speaker-lwzgw\" (UID: \"4762d945-0720-43a9-8af2-0317ce89dda2\") " pod="metallb-system/speaker-lwzgw" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.276041 4769 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.276126 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8fbbec23-1005-4364-bf82-8a646a24801a-metrics-certs\") pod \"controller-6968d8fdc4-qkpds\" (UID: \"8fbbec23-1005-4364-bf82-8a646a24801a\") " pod="metallb-system/controller-6968d8fdc4-qkpds" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.286329 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8fbbec23-1005-4364-bf82-8a646a24801a-cert\") pod \"controller-6968d8fdc4-qkpds\" (UID: \"8fbbec23-1005-4364-bf82-8a646a24801a\") " pod="metallb-system/controller-6968d8fdc4-qkpds" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.291448 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42cgt\" (UniqueName: \"kubernetes.io/projected/8fbbec23-1005-4364-bf82-8a646a24801a-kube-api-access-42cgt\") pod \"controller-6968d8fdc4-qkpds\" (UID: \"8fbbec23-1005-4364-bf82-8a646a24801a\") " pod="metallb-system/controller-6968d8fdc4-qkpds" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.294145 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lstr4\" (UniqueName: \"kubernetes.io/projected/4762d945-0720-43a9-8af2-0317ce89dda2-kube-api-access-lstr4\") pod \"speaker-lwzgw\" (UID: \"4762d945-0720-43a9-8af2-0317ce89dda2\") " pod="metallb-system/speaker-lwzgw" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.373508 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-5vm9t" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.454870 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-qkpds" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.679102 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/82c00d20-0e87-4f34-9cae-d454867c62a0-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-9n85j\" (UID: \"82c00d20-0e87-4f34-9cae-d454867c62a0\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-9n85j" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.689944 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/82c00d20-0e87-4f34-9cae-d454867c62a0-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-9n85j\" (UID: \"82c00d20-0e87-4f34-9cae-d454867c62a0\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-9n85j" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.705161 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-qkpds"] Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.767528 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5vm9t" event={"ID":"877a13a0-eef8-4409-b421-e3a8c23abc8a","Type":"ContainerStarted","Data":"bf9baa78704a8825bcfad0bd10acbef54170e880d8b884e049f12093bc0c6993"} Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.768516 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-qkpds" event={"ID":"8fbbec23-1005-4364-bf82-8a646a24801a","Type":"ContainerStarted","Data":"97d8cb24efa65ed90003b9c7a6d1f1cbfaa8b88a8d3a2c4bab2c9d1f27b64678"} Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.781012 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4762d945-0720-43a9-8af2-0317ce89dda2-metrics-certs\") pod \"speaker-lwzgw\" (UID: \"4762d945-0720-43a9-8af2-0317ce89dda2\") " pod="metallb-system/speaker-lwzgw" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.781102 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4762d945-0720-43a9-8af2-0317ce89dda2-memberlist\") pod \"speaker-lwzgw\" (UID: \"4762d945-0720-43a9-8af2-0317ce89dda2\") " pod="metallb-system/speaker-lwzgw" Jan 22 13:57:14 crc kubenswrapper[4769]: E0122 13:57:14.781208 4769 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 22 13:57:14 crc kubenswrapper[4769]: E0122 13:57:14.781249 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4762d945-0720-43a9-8af2-0317ce89dda2-memberlist podName:4762d945-0720-43a9-8af2-0317ce89dda2 nodeName:}" failed. No retries permitted until 2026-01-22 13:57:15.781236662 +0000 UTC m=+815.192346591 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/4762d945-0720-43a9-8af2-0317ce89dda2-memberlist") pod "speaker-lwzgw" (UID: "4762d945-0720-43a9-8af2-0317ce89dda2") : secret "metallb-memberlist" not found Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.786086 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4762d945-0720-43a9-8af2-0317ce89dda2-metrics-certs\") pod \"speaker-lwzgw\" (UID: \"4762d945-0720-43a9-8af2-0317ce89dda2\") " pod="metallb-system/speaker-lwzgw" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.984636 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-9n85j" Jan 22 13:57:15 crc kubenswrapper[4769]: I0122 13:57:15.373907 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-9n85j"] Jan 22 13:57:15 crc kubenswrapper[4769]: W0122 13:57:15.378094 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod82c00d20_0e87_4f34_9cae_d454867c62a0.slice/crio-68a4b265091dd26e16945e69583e645f94fc709a02a7cdc0a38ba933a5eb3d4d WatchSource:0}: Error finding container 68a4b265091dd26e16945e69583e645f94fc709a02a7cdc0a38ba933a5eb3d4d: Status 404 returned error can't find the container with id 68a4b265091dd26e16945e69583e645f94fc709a02a7cdc0a38ba933a5eb3d4d Jan 22 13:57:15 crc kubenswrapper[4769]: I0122 13:57:15.776017 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-9n85j" event={"ID":"82c00d20-0e87-4f34-9cae-d454867c62a0","Type":"ContainerStarted","Data":"68a4b265091dd26e16945e69583e645f94fc709a02a7cdc0a38ba933a5eb3d4d"} Jan 22 13:57:15 crc kubenswrapper[4769]: I0122 13:57:15.777891 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-qkpds" event={"ID":"8fbbec23-1005-4364-bf82-8a646a24801a","Type":"ContainerStarted","Data":"2228558587eb0d6c954924fb70ce7853356dc45e5f9c1cc75078a449fc944c51"} Jan 22 13:57:15 crc kubenswrapper[4769]: I0122 13:57:15.777920 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-qkpds" event={"ID":"8fbbec23-1005-4364-bf82-8a646a24801a","Type":"ContainerStarted","Data":"27d43a10273b2050f21a3ce7386c578bb2fe88fd0491281d4323a945ed721cd1"} Jan 22 13:57:15 crc kubenswrapper[4769]: I0122 13:57:15.779047 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-qkpds" Jan 22 13:57:15 crc kubenswrapper[4769]: I0122 13:57:15.798419 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4762d945-0720-43a9-8af2-0317ce89dda2-memberlist\") pod \"speaker-lwzgw\" (UID: \"4762d945-0720-43a9-8af2-0317ce89dda2\") " pod="metallb-system/speaker-lwzgw" Jan 22 13:57:15 crc kubenswrapper[4769]: I0122 13:57:15.801518 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-qkpds" podStartSLOduration=1.801489282 podStartE2EDuration="1.801489282s" podCreationTimestamp="2026-01-22 13:57:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:57:15.796815364 +0000 UTC m=+815.207925293" watchObservedRunningTime="2026-01-22 
13:57:15.801489282 +0000 UTC m=+815.212599211" Jan 22 13:57:15 crc kubenswrapper[4769]: I0122 13:57:15.811992 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4762d945-0720-43a9-8af2-0317ce89dda2-memberlist\") pod \"speaker-lwzgw\" (UID: \"4762d945-0720-43a9-8af2-0317ce89dda2\") " pod="metallb-system/speaker-lwzgw" Jan 22 13:57:15 crc kubenswrapper[4769]: I0122 13:57:15.938636 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-lwzgw" Jan 22 13:57:16 crc kubenswrapper[4769]: I0122 13:57:16.786478 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-lwzgw" event={"ID":"4762d945-0720-43a9-8af2-0317ce89dda2","Type":"ContainerStarted","Data":"8c40d999d365cfb42d78b2541bff6e59ca12406d42729de3e879460e139fe2a6"} Jan 22 13:57:16 crc kubenswrapper[4769]: I0122 13:57:16.786545 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-lwzgw" event={"ID":"4762d945-0720-43a9-8af2-0317ce89dda2","Type":"ContainerStarted","Data":"2c533603351da98406f4cf0e54ed1e8f6ac61300a2ca9063e969f80c7c28b07b"} Jan 22 13:57:16 crc kubenswrapper[4769]: I0122 13:57:16.786560 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-lwzgw" event={"ID":"4762d945-0720-43a9-8af2-0317ce89dda2","Type":"ContainerStarted","Data":"3f8efede264931bfc2e40600bf8f74adefcfbfc12437fff9b34ce8e0d56d11ee"} Jan 22 13:57:16 crc kubenswrapper[4769]: I0122 13:57:16.786835 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-lwzgw" Jan 22 13:57:16 crc kubenswrapper[4769]: I0122 13:57:16.809320 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-lwzgw" podStartSLOduration=2.809299792 podStartE2EDuration="2.809299792s" podCreationTimestamp="2026-01-22 13:57:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:57:16.808366077 +0000 UTC m=+816.219476006" watchObservedRunningTime="2026-01-22 13:57:16.809299792 +0000 UTC m=+816.220409721" Jan 22 13:57:22 crc kubenswrapper[4769]: I0122 13:57:22.829644 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-9n85j" event={"ID":"82c00d20-0e87-4f34-9cae-d454867c62a0","Type":"ContainerStarted","Data":"8edd3e266a2c9fb36355123a5a006e538490ca45d355b9d3c70071dd251745cb"} Jan 22 13:57:22 crc kubenswrapper[4769]: I0122 13:57:22.830512 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-9n85j" Jan 22 13:57:22 crc kubenswrapper[4769]: I0122 13:57:22.831534 4769 generic.go:334] "Generic (PLEG): container finished" podID="877a13a0-eef8-4409-b421-e3a8c23abc8a" containerID="3ddf68a58f5c9fea873fd5bdb5df851b316a079b68820236d3b921cc42eeb630" exitCode=0 Jan 22 13:57:22 crc kubenswrapper[4769]: I0122 13:57:22.831589 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5vm9t" event={"ID":"877a13a0-eef8-4409-b421-e3a8c23abc8a","Type":"ContainerDied","Data":"3ddf68a58f5c9fea873fd5bdb5df851b316a079b68820236d3b921cc42eeb630"} Jan 22 13:57:22 crc kubenswrapper[4769]: I0122 13:57:22.849843 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-9n85j" podStartSLOduration=2.205915818 podStartE2EDuration="8.849826103s" 
podCreationTimestamp="2026-01-22 13:57:14 +0000 UTC" firstStartedPulling="2026-01-22 13:57:15.382151959 +0000 UTC m=+814.793261888" lastFinishedPulling="2026-01-22 13:57:22.026062244 +0000 UTC m=+821.437172173" observedRunningTime="2026-01-22 13:57:22.845395941 +0000 UTC m=+822.256505880" watchObservedRunningTime="2026-01-22 13:57:22.849826103 +0000 UTC m=+822.260936032" Jan 22 13:57:23 crc kubenswrapper[4769]: I0122 13:57:23.838213 4769 generic.go:334] "Generic (PLEG): container finished" podID="877a13a0-eef8-4409-b421-e3a8c23abc8a" containerID="0058a5cb70907264a3bd840598d04dfd89eef9277b655c5bb5f7ffcc58fb8c08" exitCode=0 Jan 22 13:57:23 crc kubenswrapper[4769]: I0122 13:57:23.838260 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5vm9t" event={"ID":"877a13a0-eef8-4409-b421-e3a8c23abc8a","Type":"ContainerDied","Data":"0058a5cb70907264a3bd840598d04dfd89eef9277b655c5bb5f7ffcc58fb8c08"} Jan 22 13:57:24 crc kubenswrapper[4769]: I0122 13:57:24.459944 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-qkpds" Jan 22 13:57:24 crc kubenswrapper[4769]: I0122 13:57:24.848197 4769 generic.go:334] "Generic (PLEG): container finished" podID="877a13a0-eef8-4409-b421-e3a8c23abc8a" containerID="357e5fab1e67b0264e8a717f0893477deaa40ba0df79be454535687b5ef66ab4" exitCode=0 Jan 22 13:57:24 crc kubenswrapper[4769]: I0122 13:57:24.848306 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5vm9t" event={"ID":"877a13a0-eef8-4409-b421-e3a8c23abc8a","Type":"ContainerDied","Data":"357e5fab1e67b0264e8a717f0893477deaa40ba0df79be454535687b5ef66ab4"} Jan 22 13:57:25 crc kubenswrapper[4769]: I0122 13:57:25.859534 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5vm9t" event={"ID":"877a13a0-eef8-4409-b421-e3a8c23abc8a","Type":"ContainerStarted","Data":"bf8b25ed283e88706b5eb9bd0a02bd919124739f28b86c203decfb0218d6c207"} Jan 22 13:57:25 crc kubenswrapper[4769]: I0122 13:57:25.859881 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-5vm9t" Jan 22 13:57:25 crc kubenswrapper[4769]: I0122 13:57:25.859892 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5vm9t" event={"ID":"877a13a0-eef8-4409-b421-e3a8c23abc8a","Type":"ContainerStarted","Data":"9e037a9aa0011434366e34154cef2f92ce2cc8ad9eaa421f2629c38e52a6f892"} Jan 22 13:57:25 crc kubenswrapper[4769]: I0122 13:57:25.859901 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5vm9t" event={"ID":"877a13a0-eef8-4409-b421-e3a8c23abc8a","Type":"ContainerStarted","Data":"de38ddc38f913217f6bf8e96bb9374b6a83a7f650ab072caf046dc3b6fdcf370"} Jan 22 13:57:25 crc kubenswrapper[4769]: I0122 13:57:25.859909 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5vm9t" event={"ID":"877a13a0-eef8-4409-b421-e3a8c23abc8a","Type":"ContainerStarted","Data":"de835798be4622074405bb08ccb35a8938baa18835b1c85228a8cd4dc0d8594d"} Jan 22 13:57:25 crc kubenswrapper[4769]: I0122 13:57:25.859916 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5vm9t" event={"ID":"877a13a0-eef8-4409-b421-e3a8c23abc8a","Type":"ContainerStarted","Data":"9226fd54f07624c284f70d71dcff60a1a82bf49fff222edc61df42b1e92935a8"} Jan 22 13:57:25 crc kubenswrapper[4769]: I0122 13:57:25.859923 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5vm9t" 
event={"ID":"877a13a0-eef8-4409-b421-e3a8c23abc8a","Type":"ContainerStarted","Data":"5a5899357d3363d687ed9684517a18956c2f5b906036b055d572de360263aaf8"} Jan 22 13:57:25 crc kubenswrapper[4769]: I0122 13:57:25.885853 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-5vm9t" podStartSLOduration=4.355966886 podStartE2EDuration="11.885831127s" podCreationTimestamp="2026-01-22 13:57:14 +0000 UTC" firstStartedPulling="2026-01-22 13:57:14.481348999 +0000 UTC m=+813.892458928" lastFinishedPulling="2026-01-22 13:57:22.01121324 +0000 UTC m=+821.422323169" observedRunningTime="2026-01-22 13:57:25.884943133 +0000 UTC m=+825.296053072" watchObservedRunningTime="2026-01-22 13:57:25.885831127 +0000 UTC m=+825.296941066" Jan 22 13:57:29 crc kubenswrapper[4769]: I0122 13:57:29.374316 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-5vm9t" Jan 22 13:57:29 crc kubenswrapper[4769]: I0122 13:57:29.419602 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-5vm9t" Jan 22 13:57:34 crc kubenswrapper[4769]: I0122 13:57:34.379076 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-5vm9t" Jan 22 13:57:34 crc kubenswrapper[4769]: I0122 13:57:34.989722 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-9n85j" Jan 22 13:57:35 crc kubenswrapper[4769]: I0122 13:57:35.943749 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-lwzgw" Jan 22 13:57:39 crc kubenswrapper[4769]: I0122 13:57:39.216314 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-mkxkq"] Jan 22 13:57:39 crc kubenswrapper[4769]: I0122 13:57:39.217459 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-mkxkq" Jan 22 13:57:39 crc kubenswrapper[4769]: I0122 13:57:39.219068 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 22 13:57:39 crc kubenswrapper[4769]: I0122 13:57:39.219768 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-z8tw5" Jan 22 13:57:39 crc kubenswrapper[4769]: I0122 13:57:39.222135 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 22 13:57:39 crc kubenswrapper[4769]: I0122 13:57:39.271631 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-mkxkq"] Jan 22 13:57:39 crc kubenswrapper[4769]: I0122 13:57:39.319749 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2m5s\" (UniqueName: \"kubernetes.io/projected/b06de39c-14ea-4ee9-9e2f-9185d1c2af7b-kube-api-access-p2m5s\") pod \"openstack-operator-index-mkxkq\" (UID: \"b06de39c-14ea-4ee9-9e2f-9185d1c2af7b\") " pod="openstack-operators/openstack-operator-index-mkxkq" Jan 22 13:57:39 crc kubenswrapper[4769]: I0122 13:57:39.421371 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2m5s\" (UniqueName: \"kubernetes.io/projected/b06de39c-14ea-4ee9-9e2f-9185d1c2af7b-kube-api-access-p2m5s\") pod \"openstack-operator-index-mkxkq\" (UID: \"b06de39c-14ea-4ee9-9e2f-9185d1c2af7b\") " pod="openstack-operators/openstack-operator-index-mkxkq" Jan 22 13:57:39 crc kubenswrapper[4769]: I0122 13:57:39.439381 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2m5s\" (UniqueName: \"kubernetes.io/projected/b06de39c-14ea-4ee9-9e2f-9185d1c2af7b-kube-api-access-p2m5s\") pod \"openstack-operator-index-mkxkq\" (UID: \"b06de39c-14ea-4ee9-9e2f-9185d1c2af7b\") " pod="openstack-operators/openstack-operator-index-mkxkq" Jan 22 13:57:39 crc kubenswrapper[4769]: I0122 13:57:39.537220 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-mkxkq" Jan 22 13:57:39 crc kubenswrapper[4769]: I0122 13:57:39.943948 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-mkxkq"] Jan 22 13:57:39 crc kubenswrapper[4769]: I0122 13:57:39.958923 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-mkxkq" event={"ID":"b06de39c-14ea-4ee9-9e2f-9185d1c2af7b","Type":"ContainerStarted","Data":"32c2c017510f58056652d0e7ab9dafab7031691572f95ab6890b77211d93e11e"} Jan 22 13:57:42 crc kubenswrapper[4769]: I0122 13:57:42.793729 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-mkxkq"] Jan 22 13:57:42 crc kubenswrapper[4769]: I0122 13:57:42.990612 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-mkxkq" event={"ID":"b06de39c-14ea-4ee9-9e2f-9185d1c2af7b","Type":"ContainerStarted","Data":"cee6933352e999bdc00eb55b35f41b4a8c310ccb5b7dfbe67ab091352c1c98d8"} Jan 22 13:57:43 crc kubenswrapper[4769]: I0122 13:57:43.004208 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-mkxkq" podStartSLOduration=1.601447917 podStartE2EDuration="4.004190292s" podCreationTimestamp="2026-01-22 13:57:39 +0000 UTC" firstStartedPulling="2026-01-22 13:57:39.951913503 +0000 UTC m=+839.363023432" lastFinishedPulling="2026-01-22 13:57:42.354655878 +0000 UTC m=+841.765765807" observedRunningTime="2026-01-22 13:57:43.002852316 +0000 UTC m=+842.413962275" watchObservedRunningTime="2026-01-22 13:57:43.004190292 +0000 UTC m=+842.415300231" Jan 22 13:57:43 crc kubenswrapper[4769]: I0122 13:57:43.394410 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-m6xzn"] Jan 22 13:57:43 crc kubenswrapper[4769]: I0122 13:57:43.395127 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-m6xzn" Jan 22 13:57:43 crc kubenswrapper[4769]: I0122 13:57:43.403739 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-m6xzn"] Jan 22 13:57:43 crc kubenswrapper[4769]: I0122 13:57:43.571226 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gkpd\" (UniqueName: \"kubernetes.io/projected/a2d7498a-59be-42c8-913e-d8c8c596828f-kube-api-access-6gkpd\") pod \"openstack-operator-index-m6xzn\" (UID: \"a2d7498a-59be-42c8-913e-d8c8c596828f\") " pod="openstack-operators/openstack-operator-index-m6xzn" Jan 22 13:57:43 crc kubenswrapper[4769]: I0122 13:57:43.673079 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gkpd\" (UniqueName: \"kubernetes.io/projected/a2d7498a-59be-42c8-913e-d8c8c596828f-kube-api-access-6gkpd\") pod \"openstack-operator-index-m6xzn\" (UID: \"a2d7498a-59be-42c8-913e-d8c8c596828f\") " pod="openstack-operators/openstack-operator-index-m6xzn" Jan 22 13:57:43 crc kubenswrapper[4769]: I0122 13:57:43.691729 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gkpd\" (UniqueName: \"kubernetes.io/projected/a2d7498a-59be-42c8-913e-d8c8c596828f-kube-api-access-6gkpd\") pod \"openstack-operator-index-m6xzn\" (UID: \"a2d7498a-59be-42c8-913e-d8c8c596828f\") " pod="openstack-operators/openstack-operator-index-m6xzn" Jan 22 13:57:43 crc kubenswrapper[4769]: I0122 13:57:43.713680 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-m6xzn" Jan 22 13:57:43 crc kubenswrapper[4769]: I0122 13:57:43.951897 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-m6xzn"] Jan 22 13:57:43 crc kubenswrapper[4769]: W0122 13:57:43.961719 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda2d7498a_59be_42c8_913e_d8c8c596828f.slice/crio-18e5e75281352b55873429c637b537eba1f50aff022764ddac0779eb099fb529 WatchSource:0}: Error finding container 18e5e75281352b55873429c637b537eba1f50aff022764ddac0779eb099fb529: Status 404 returned error can't find the container with id 18e5e75281352b55873429c637b537eba1f50aff022764ddac0779eb099fb529 Jan 22 13:57:44 crc kubenswrapper[4769]: I0122 13:57:44.027105 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-mkxkq" podUID="b06de39c-14ea-4ee9-9e2f-9185d1c2af7b" containerName="registry-server" containerID="cri-o://cee6933352e999bdc00eb55b35f41b4a8c310ccb5b7dfbe67ab091352c1c98d8" gracePeriod=2 Jan 22 13:57:44 crc kubenswrapper[4769]: I0122 13:57:44.027405 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-m6xzn" event={"ID":"a2d7498a-59be-42c8-913e-d8c8c596828f","Type":"ContainerStarted","Data":"18e5e75281352b55873429c637b537eba1f50aff022764ddac0779eb099fb529"} Jan 22 13:57:44 crc kubenswrapper[4769]: I0122 13:57:44.337171 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-mkxkq" Jan 22 13:57:44 crc kubenswrapper[4769]: I0122 13:57:44.425134 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2m5s\" (UniqueName: \"kubernetes.io/projected/b06de39c-14ea-4ee9-9e2f-9185d1c2af7b-kube-api-access-p2m5s\") pod \"b06de39c-14ea-4ee9-9e2f-9185d1c2af7b\" (UID: \"b06de39c-14ea-4ee9-9e2f-9185d1c2af7b\") " Jan 22 13:57:44 crc kubenswrapper[4769]: I0122 13:57:44.430752 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b06de39c-14ea-4ee9-9e2f-9185d1c2af7b-kube-api-access-p2m5s" (OuterVolumeSpecName: "kube-api-access-p2m5s") pod "b06de39c-14ea-4ee9-9e2f-9185d1c2af7b" (UID: "b06de39c-14ea-4ee9-9e2f-9185d1c2af7b"). InnerVolumeSpecName "kube-api-access-p2m5s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:57:44 crc kubenswrapper[4769]: I0122 13:57:44.526349 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2m5s\" (UniqueName: \"kubernetes.io/projected/b06de39c-14ea-4ee9-9e2f-9185d1c2af7b-kube-api-access-p2m5s\") on node \"crc\" DevicePath \"\"" Jan 22 13:57:45 crc kubenswrapper[4769]: I0122 13:57:45.034632 4769 generic.go:334] "Generic (PLEG): container finished" podID="b06de39c-14ea-4ee9-9e2f-9185d1c2af7b" containerID="cee6933352e999bdc00eb55b35f41b4a8c310ccb5b7dfbe67ab091352c1c98d8" exitCode=0 Jan 22 13:57:45 crc kubenswrapper[4769]: I0122 13:57:45.034696 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-mkxkq" Jan 22 13:57:45 crc kubenswrapper[4769]: I0122 13:57:45.034711 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-mkxkq" event={"ID":"b06de39c-14ea-4ee9-9e2f-9185d1c2af7b","Type":"ContainerDied","Data":"cee6933352e999bdc00eb55b35f41b4a8c310ccb5b7dfbe67ab091352c1c98d8"} Jan 22 13:57:45 crc kubenswrapper[4769]: I0122 13:57:45.034739 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-mkxkq" event={"ID":"b06de39c-14ea-4ee9-9e2f-9185d1c2af7b","Type":"ContainerDied","Data":"32c2c017510f58056652d0e7ab9dafab7031691572f95ab6890b77211d93e11e"} Jan 22 13:57:45 crc kubenswrapper[4769]: I0122 13:57:45.034755 4769 scope.go:117] "RemoveContainer" containerID="cee6933352e999bdc00eb55b35f41b4a8c310ccb5b7dfbe67ab091352c1c98d8" Jan 22 13:57:45 crc kubenswrapper[4769]: I0122 13:57:45.038126 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-m6xzn" event={"ID":"a2d7498a-59be-42c8-913e-d8c8c596828f","Type":"ContainerStarted","Data":"09bd46dc005e8a125d960a3e212bba6740b4f1e12b65a903b6e3c36f198449fb"} Jan 22 13:57:45 crc kubenswrapper[4769]: I0122 13:57:45.049520 4769 scope.go:117] "RemoveContainer" containerID="cee6933352e999bdc00eb55b35f41b4a8c310ccb5b7dfbe67ab091352c1c98d8" Jan 22 13:57:45 crc kubenswrapper[4769]: E0122 13:57:45.050691 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cee6933352e999bdc00eb55b35f41b4a8c310ccb5b7dfbe67ab091352c1c98d8\": container with ID starting with cee6933352e999bdc00eb55b35f41b4a8c310ccb5b7dfbe67ab091352c1c98d8 not found: ID does not exist" containerID="cee6933352e999bdc00eb55b35f41b4a8c310ccb5b7dfbe67ab091352c1c98d8" Jan 22 13:57:45 crc kubenswrapper[4769]: I0122 13:57:45.050733 4769 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cee6933352e999bdc00eb55b35f41b4a8c310ccb5b7dfbe67ab091352c1c98d8"} err="failed to get container status \"cee6933352e999bdc00eb55b35f41b4a8c310ccb5b7dfbe67ab091352c1c98d8\": rpc error: code = NotFound desc = could not find container \"cee6933352e999bdc00eb55b35f41b4a8c310ccb5b7dfbe67ab091352c1c98d8\": container with ID starting with cee6933352e999bdc00eb55b35f41b4a8c310ccb5b7dfbe67ab091352c1c98d8 not found: ID does not exist" Jan 22 13:57:45 crc kubenswrapper[4769]: I0122 13:57:45.056724 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-mkxkq"] Jan 22 13:57:45 crc kubenswrapper[4769]: I0122 13:57:45.061824 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-mkxkq"] Jan 22 13:57:45 crc kubenswrapper[4769]: I0122 13:57:45.067952 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-m6xzn" podStartSLOduration=1.9862938190000001 podStartE2EDuration="2.067937147s" podCreationTimestamp="2026-01-22 13:57:43 +0000 UTC" firstStartedPulling="2026-01-22 13:57:43.972278959 +0000 UTC m=+843.383388888" lastFinishedPulling="2026-01-22 13:57:44.053922267 +0000 UTC m=+843.465032216" observedRunningTime="2026-01-22 13:57:45.067210397 +0000 UTC m=+844.478320326" watchObservedRunningTime="2026-01-22 13:57:45.067937147 +0000 UTC m=+844.479047076" Jan 22 13:57:46 crc kubenswrapper[4769]: I0122 13:57:46.892589 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b06de39c-14ea-4ee9-9e2f-9185d1c2af7b" path="/var/lib/kubelet/pods/b06de39c-14ea-4ee9-9e2f-9185d1c2af7b/volumes" Jan 22 13:57:53 crc kubenswrapper[4769]: I0122 13:57:53.714190 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-m6xzn" Jan 22 13:57:53 crc kubenswrapper[4769]: I0122 13:57:53.714623 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-m6xzn" Jan 22 13:57:53 crc kubenswrapper[4769]: I0122 13:57:53.744447 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-m6xzn" Jan 22 13:57:54 crc kubenswrapper[4769]: I0122 13:57:54.118645 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-m6xzn" Jan 22 13:57:56 crc kubenswrapper[4769]: I0122 13:57:56.406201 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vf99m"] Jan 22 13:57:56 crc kubenswrapper[4769]: E0122 13:57:56.407110 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b06de39c-14ea-4ee9-9e2f-9185d1c2af7b" containerName="registry-server" Jan 22 13:57:56 crc kubenswrapper[4769]: I0122 13:57:56.407135 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="b06de39c-14ea-4ee9-9e2f-9185d1c2af7b" containerName="registry-server" Jan 22 13:57:56 crc kubenswrapper[4769]: I0122 13:57:56.407342 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="b06de39c-14ea-4ee9-9e2f-9185d1c2af7b" containerName="registry-server" Jan 22 13:57:56 crc kubenswrapper[4769]: I0122 13:57:56.408767 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vf99m" Jan 22 13:57:56 crc kubenswrapper[4769]: I0122 13:57:56.419980 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vf99m"] Jan 22 13:57:56 crc kubenswrapper[4769]: I0122 13:57:56.493742 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19e34c89-b2d2-4bd3-a9b1-eff968aefea7-catalog-content\") pod \"redhat-marketplace-vf99m\" (UID: \"19e34c89-b2d2-4bd3-a9b1-eff968aefea7\") " pod="openshift-marketplace/redhat-marketplace-vf99m" Jan 22 13:57:56 crc kubenswrapper[4769]: I0122 13:57:56.493847 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r76nc\" (UniqueName: \"kubernetes.io/projected/19e34c89-b2d2-4bd3-a9b1-eff968aefea7-kube-api-access-r76nc\") pod \"redhat-marketplace-vf99m\" (UID: \"19e34c89-b2d2-4bd3-a9b1-eff968aefea7\") " pod="openshift-marketplace/redhat-marketplace-vf99m" Jan 22 13:57:56 crc kubenswrapper[4769]: I0122 13:57:56.493876 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19e34c89-b2d2-4bd3-a9b1-eff968aefea7-utilities\") pod \"redhat-marketplace-vf99m\" (UID: \"19e34c89-b2d2-4bd3-a9b1-eff968aefea7\") " pod="openshift-marketplace/redhat-marketplace-vf99m" Jan 22 13:57:56 crc kubenswrapper[4769]: I0122 13:57:56.595461 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19e34c89-b2d2-4bd3-a9b1-eff968aefea7-catalog-content\") pod \"redhat-marketplace-vf99m\" (UID: \"19e34c89-b2d2-4bd3-a9b1-eff968aefea7\") " pod="openshift-marketplace/redhat-marketplace-vf99m" Jan 22 13:57:56 crc kubenswrapper[4769]: I0122 13:57:56.595558 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r76nc\" (UniqueName: \"kubernetes.io/projected/19e34c89-b2d2-4bd3-a9b1-eff968aefea7-kube-api-access-r76nc\") pod \"redhat-marketplace-vf99m\" (UID: \"19e34c89-b2d2-4bd3-a9b1-eff968aefea7\") " pod="openshift-marketplace/redhat-marketplace-vf99m" Jan 22 13:57:56 crc kubenswrapper[4769]: I0122 13:57:56.595588 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19e34c89-b2d2-4bd3-a9b1-eff968aefea7-utilities\") pod \"redhat-marketplace-vf99m\" (UID: \"19e34c89-b2d2-4bd3-a9b1-eff968aefea7\") " pod="openshift-marketplace/redhat-marketplace-vf99m" Jan 22 13:57:56 crc kubenswrapper[4769]: I0122 13:57:56.596180 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19e34c89-b2d2-4bd3-a9b1-eff968aefea7-catalog-content\") pod \"redhat-marketplace-vf99m\" (UID: \"19e34c89-b2d2-4bd3-a9b1-eff968aefea7\") " pod="openshift-marketplace/redhat-marketplace-vf99m" Jan 22 13:57:56 crc kubenswrapper[4769]: I0122 13:57:56.596215 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19e34c89-b2d2-4bd3-a9b1-eff968aefea7-utilities\") pod \"redhat-marketplace-vf99m\" (UID: \"19e34c89-b2d2-4bd3-a9b1-eff968aefea7\") " pod="openshift-marketplace/redhat-marketplace-vf99m" Jan 22 13:57:56 crc kubenswrapper[4769]: I0122 13:57:56.619010 4769 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-r76nc\" (UniqueName: \"kubernetes.io/projected/19e34c89-b2d2-4bd3-a9b1-eff968aefea7-kube-api-access-r76nc\") pod \"redhat-marketplace-vf99m\" (UID: \"19e34c89-b2d2-4bd3-a9b1-eff968aefea7\") " pod="openshift-marketplace/redhat-marketplace-vf99m" Jan 22 13:57:56 crc kubenswrapper[4769]: I0122 13:57:56.726986 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vf99m" Jan 22 13:57:56 crc kubenswrapper[4769]: I0122 13:57:56.951272 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vf99m"] Jan 22 13:57:56 crc kubenswrapper[4769]: W0122 13:57:56.958293 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod19e34c89_b2d2_4bd3_a9b1_eff968aefea7.slice/crio-2e64b6e8a520f3843ec7c9ae5982056e8fcd13959c76144eb1858b806a0dcc3b WatchSource:0}: Error finding container 2e64b6e8a520f3843ec7c9ae5982056e8fcd13959c76144eb1858b806a0dcc3b: Status 404 returned error can't find the container with id 2e64b6e8a520f3843ec7c9ae5982056e8fcd13959c76144eb1858b806a0dcc3b Jan 22 13:57:57 crc kubenswrapper[4769]: I0122 13:57:57.111186 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vf99m" event={"ID":"19e34c89-b2d2-4bd3-a9b1-eff968aefea7","Type":"ContainerStarted","Data":"2e64b6e8a520f3843ec7c9ae5982056e8fcd13959c76144eb1858b806a0dcc3b"} Jan 22 13:57:58 crc kubenswrapper[4769]: I0122 13:57:58.119154 4769 generic.go:334] "Generic (PLEG): container finished" podID="19e34c89-b2d2-4bd3-a9b1-eff968aefea7" containerID="13b66d99502694caaa890328fbe448d01ad157fe9766454de4bcbc559f093be6" exitCode=0 Jan 22 13:57:58 crc kubenswrapper[4769]: I0122 13:57:58.119212 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vf99m" event={"ID":"19e34c89-b2d2-4bd3-a9b1-eff968aefea7","Type":"ContainerDied","Data":"13b66d99502694caaa890328fbe448d01ad157fe9766454de4bcbc559f093be6"} Jan 22 13:57:59 crc kubenswrapper[4769]: I0122 13:57:59.130221 4769 generic.go:334] "Generic (PLEG): container finished" podID="19e34c89-b2d2-4bd3-a9b1-eff968aefea7" containerID="2b9202e4245ed91c20beeb633e4f1181139c96e55d88de4beac3b7578eb742b2" exitCode=0 Jan 22 13:57:59 crc kubenswrapper[4769]: I0122 13:57:59.130291 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vf99m" event={"ID":"19e34c89-b2d2-4bd3-a9b1-eff968aefea7","Type":"ContainerDied","Data":"2b9202e4245ed91c20beeb633e4f1181139c96e55d88de4beac3b7578eb742b2"} Jan 22 13:58:00 crc kubenswrapper[4769]: I0122 13:58:00.030092 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/c02f5c6e90e220c3e85f478d75741465573a92143592780ee4258ca577lthv9"] Jan 22 13:58:00 crc kubenswrapper[4769]: I0122 13:58:00.031841 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/c02f5c6e90e220c3e85f478d75741465573a92143592780ee4258ca577lthv9" Jan 22 13:58:00 crc kubenswrapper[4769]: I0122 13:58:00.034092 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-vtwvl" Jan 22 13:58:00 crc kubenswrapper[4769]: I0122 13:58:00.043756 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/c02f5c6e90e220c3e85f478d75741465573a92143592780ee4258ca577lthv9"] Jan 22 13:58:00 crc kubenswrapper[4769]: I0122 13:58:00.140904 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vf99m" event={"ID":"19e34c89-b2d2-4bd3-a9b1-eff968aefea7","Type":"ContainerStarted","Data":"32d2c57e67c586f7ef14b5cdd5a595883b1783d97fd812f954865eaeeda5831e"} Jan 22 13:58:00 crc kubenswrapper[4769]: I0122 13:58:00.141726 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7585045d-5962-4b7d-903e-97f301a8fd47-util\") pod \"c02f5c6e90e220c3e85f478d75741465573a92143592780ee4258ca577lthv9\" (UID: \"7585045d-5962-4b7d-903e-97f301a8fd47\") " pod="openstack-operators/c02f5c6e90e220c3e85f478d75741465573a92143592780ee4258ca577lthv9" Jan 22 13:58:00 crc kubenswrapper[4769]: I0122 13:58:00.141817 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pdjv\" (UniqueName: \"kubernetes.io/projected/7585045d-5962-4b7d-903e-97f301a8fd47-kube-api-access-9pdjv\") pod \"c02f5c6e90e220c3e85f478d75741465573a92143592780ee4258ca577lthv9\" (UID: \"7585045d-5962-4b7d-903e-97f301a8fd47\") " pod="openstack-operators/c02f5c6e90e220c3e85f478d75741465573a92143592780ee4258ca577lthv9" Jan 22 13:58:00 crc kubenswrapper[4769]: I0122 13:58:00.141845 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7585045d-5962-4b7d-903e-97f301a8fd47-bundle\") pod \"c02f5c6e90e220c3e85f478d75741465573a92143592780ee4258ca577lthv9\" (UID: \"7585045d-5962-4b7d-903e-97f301a8fd47\") " pod="openstack-operators/c02f5c6e90e220c3e85f478d75741465573a92143592780ee4258ca577lthv9" Jan 22 13:58:00 crc kubenswrapper[4769]: I0122 13:58:00.164430 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vf99m" podStartSLOduration=2.7386126109999998 podStartE2EDuration="4.164415598s" podCreationTimestamp="2026-01-22 13:57:56 +0000 UTC" firstStartedPulling="2026-01-22 13:57:58.120892455 +0000 UTC m=+857.532002394" lastFinishedPulling="2026-01-22 13:57:59.546695422 +0000 UTC m=+858.957805381" observedRunningTime="2026-01-22 13:58:00.162485818 +0000 UTC m=+859.573595747" watchObservedRunningTime="2026-01-22 13:58:00.164415598 +0000 UTC m=+859.575525527" Jan 22 13:58:00 crc kubenswrapper[4769]: I0122 13:58:00.243062 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pdjv\" (UniqueName: \"kubernetes.io/projected/7585045d-5962-4b7d-903e-97f301a8fd47-kube-api-access-9pdjv\") pod \"c02f5c6e90e220c3e85f478d75741465573a92143592780ee4258ca577lthv9\" (UID: \"7585045d-5962-4b7d-903e-97f301a8fd47\") " pod="openstack-operators/c02f5c6e90e220c3e85f478d75741465573a92143592780ee4258ca577lthv9" Jan 22 13:58:00 crc kubenswrapper[4769]: I0122 13:58:00.243118 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/7585045d-5962-4b7d-903e-97f301a8fd47-bundle\") pod \"c02f5c6e90e220c3e85f478d75741465573a92143592780ee4258ca577lthv9\" (UID: \"7585045d-5962-4b7d-903e-97f301a8fd47\") " pod="openstack-operators/c02f5c6e90e220c3e85f478d75741465573a92143592780ee4258ca577lthv9" Jan 22 13:58:00 crc kubenswrapper[4769]: I0122 13:58:00.243170 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7585045d-5962-4b7d-903e-97f301a8fd47-util\") pod \"c02f5c6e90e220c3e85f478d75741465573a92143592780ee4258ca577lthv9\" (UID: \"7585045d-5962-4b7d-903e-97f301a8fd47\") " pod="openstack-operators/c02f5c6e90e220c3e85f478d75741465573a92143592780ee4258ca577lthv9" Jan 22 13:58:00 crc kubenswrapper[4769]: I0122 13:58:00.243619 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7585045d-5962-4b7d-903e-97f301a8fd47-util\") pod \"c02f5c6e90e220c3e85f478d75741465573a92143592780ee4258ca577lthv9\" (UID: \"7585045d-5962-4b7d-903e-97f301a8fd47\") " pod="openstack-operators/c02f5c6e90e220c3e85f478d75741465573a92143592780ee4258ca577lthv9" Jan 22 13:58:00 crc kubenswrapper[4769]: I0122 13:58:00.243920 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7585045d-5962-4b7d-903e-97f301a8fd47-bundle\") pod \"c02f5c6e90e220c3e85f478d75741465573a92143592780ee4258ca577lthv9\" (UID: \"7585045d-5962-4b7d-903e-97f301a8fd47\") " pod="openstack-operators/c02f5c6e90e220c3e85f478d75741465573a92143592780ee4258ca577lthv9" Jan 22 13:58:00 crc kubenswrapper[4769]: I0122 13:58:00.260708 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pdjv\" (UniqueName: \"kubernetes.io/projected/7585045d-5962-4b7d-903e-97f301a8fd47-kube-api-access-9pdjv\") pod \"c02f5c6e90e220c3e85f478d75741465573a92143592780ee4258ca577lthv9\" (UID: \"7585045d-5962-4b7d-903e-97f301a8fd47\") " pod="openstack-operators/c02f5c6e90e220c3e85f478d75741465573a92143592780ee4258ca577lthv9" Jan 22 13:58:00 crc kubenswrapper[4769]: I0122 13:58:00.390517 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/c02f5c6e90e220c3e85f478d75741465573a92143592780ee4258ca577lthv9" Jan 22 13:58:00 crc kubenswrapper[4769]: I0122 13:58:00.774443 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/c02f5c6e90e220c3e85f478d75741465573a92143592780ee4258ca577lthv9"] Jan 22 13:58:01 crc kubenswrapper[4769]: I0122 13:58:01.148121 4769 generic.go:334] "Generic (PLEG): container finished" podID="7585045d-5962-4b7d-903e-97f301a8fd47" containerID="84f76c48335d3300281282bed6e5d5410b7b65ceadfd7de286855f47cedb1ddf" exitCode=0 Jan 22 13:58:01 crc kubenswrapper[4769]: I0122 13:58:01.148208 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c02f5c6e90e220c3e85f478d75741465573a92143592780ee4258ca577lthv9" event={"ID":"7585045d-5962-4b7d-903e-97f301a8fd47","Type":"ContainerDied","Data":"84f76c48335d3300281282bed6e5d5410b7b65ceadfd7de286855f47cedb1ddf"} Jan 22 13:58:01 crc kubenswrapper[4769]: I0122 13:58:01.148433 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c02f5c6e90e220c3e85f478d75741465573a92143592780ee4258ca577lthv9" event={"ID":"7585045d-5962-4b7d-903e-97f301a8fd47","Type":"ContainerStarted","Data":"9b9ce0b2453aa1487353b09cd103d42a91675a35517546b8099b00dea85c2be4"} Jan 22 13:58:02 crc kubenswrapper[4769]: I0122 13:58:02.167567 4769 generic.go:334] "Generic (PLEG): container finished" podID="7585045d-5962-4b7d-903e-97f301a8fd47" containerID="e82024a8ed83f437850abd823180c33a14c69bdac45a7c97bc85801c44fe4add" exitCode=0 Jan 22 13:58:02 crc kubenswrapper[4769]: I0122 13:58:02.167668 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c02f5c6e90e220c3e85f478d75741465573a92143592780ee4258ca577lthv9" event={"ID":"7585045d-5962-4b7d-903e-97f301a8fd47","Type":"ContainerDied","Data":"e82024a8ed83f437850abd823180c33a14c69bdac45a7c97bc85801c44fe4add"} Jan 22 13:58:03 crc kubenswrapper[4769]: I0122 13:58:03.186209 4769 generic.go:334] "Generic (PLEG): container finished" podID="7585045d-5962-4b7d-903e-97f301a8fd47" containerID="2a705bd50a434df768b1e6946a1bad83acaaac3593937a8650f6fd00ee6bfee8" exitCode=0 Jan 22 13:58:03 crc kubenswrapper[4769]: I0122 13:58:03.186555 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c02f5c6e90e220c3e85f478d75741465573a92143592780ee4258ca577lthv9" event={"ID":"7585045d-5962-4b7d-903e-97f301a8fd47","Type":"ContainerDied","Data":"2a705bd50a434df768b1e6946a1bad83acaaac3593937a8650f6fd00ee6bfee8"} Jan 22 13:58:04 crc kubenswrapper[4769]: I0122 13:58:04.432850 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/c02f5c6e90e220c3e85f478d75741465573a92143592780ee4258ca577lthv9" Jan 22 13:58:04 crc kubenswrapper[4769]: I0122 13:58:04.495876 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9pdjv\" (UniqueName: \"kubernetes.io/projected/7585045d-5962-4b7d-903e-97f301a8fd47-kube-api-access-9pdjv\") pod \"7585045d-5962-4b7d-903e-97f301a8fd47\" (UID: \"7585045d-5962-4b7d-903e-97f301a8fd47\") " Jan 22 13:58:04 crc kubenswrapper[4769]: I0122 13:58:04.495935 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7585045d-5962-4b7d-903e-97f301a8fd47-bundle\") pod \"7585045d-5962-4b7d-903e-97f301a8fd47\" (UID: \"7585045d-5962-4b7d-903e-97f301a8fd47\") " Jan 22 13:58:04 crc kubenswrapper[4769]: I0122 13:58:04.495957 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7585045d-5962-4b7d-903e-97f301a8fd47-util\") pod \"7585045d-5962-4b7d-903e-97f301a8fd47\" (UID: \"7585045d-5962-4b7d-903e-97f301a8fd47\") " Jan 22 13:58:04 crc kubenswrapper[4769]: I0122 13:58:04.497215 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7585045d-5962-4b7d-903e-97f301a8fd47-bundle" (OuterVolumeSpecName: "bundle") pod "7585045d-5962-4b7d-903e-97f301a8fd47" (UID: "7585045d-5962-4b7d-903e-97f301a8fd47"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 13:58:04 crc kubenswrapper[4769]: I0122 13:58:04.501429 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7585045d-5962-4b7d-903e-97f301a8fd47-kube-api-access-9pdjv" (OuterVolumeSpecName: "kube-api-access-9pdjv") pod "7585045d-5962-4b7d-903e-97f301a8fd47" (UID: "7585045d-5962-4b7d-903e-97f301a8fd47"). InnerVolumeSpecName "kube-api-access-9pdjv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:58:04 crc kubenswrapper[4769]: I0122 13:58:04.509738 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7585045d-5962-4b7d-903e-97f301a8fd47-util" (OuterVolumeSpecName: "util") pod "7585045d-5962-4b7d-903e-97f301a8fd47" (UID: "7585045d-5962-4b7d-903e-97f301a8fd47"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 13:58:04 crc kubenswrapper[4769]: I0122 13:58:04.597779 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9pdjv\" (UniqueName: \"kubernetes.io/projected/7585045d-5962-4b7d-903e-97f301a8fd47-kube-api-access-9pdjv\") on node \"crc\" DevicePath \"\"" Jan 22 13:58:04 crc kubenswrapper[4769]: I0122 13:58:04.597900 4769 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7585045d-5962-4b7d-903e-97f301a8fd47-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 13:58:04 crc kubenswrapper[4769]: I0122 13:58:04.597912 4769 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7585045d-5962-4b7d-903e-97f301a8fd47-util\") on node \"crc\" DevicePath \"\"" Jan 22 13:58:05 crc kubenswrapper[4769]: I0122 13:58:05.201301 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c02f5c6e90e220c3e85f478d75741465573a92143592780ee4258ca577lthv9" event={"ID":"7585045d-5962-4b7d-903e-97f301a8fd47","Type":"ContainerDied","Data":"9b9ce0b2453aa1487353b09cd103d42a91675a35517546b8099b00dea85c2be4"} Jan 22 13:58:05 crc kubenswrapper[4769]: I0122 13:58:05.201345 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b9ce0b2453aa1487353b09cd103d42a91675a35517546b8099b00dea85c2be4" Jan 22 13:58:05 crc kubenswrapper[4769]: I0122 13:58:05.201414 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/c02f5c6e90e220c3e85f478d75741465573a92143592780ee4258ca577lthv9" Jan 22 13:58:06 crc kubenswrapper[4769]: I0122 13:58:06.727347 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vf99m" Jan 22 13:58:06 crc kubenswrapper[4769]: I0122 13:58:06.727876 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vf99m" Jan 22 13:58:06 crc kubenswrapper[4769]: I0122 13:58:06.788414 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vf99m" Jan 22 13:58:07 crc kubenswrapper[4769]: I0122 13:58:07.255259 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vf99m" Jan 22 13:58:08 crc kubenswrapper[4769]: I0122 13:58:08.452670 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-f94887bb5-8mc8h"] Jan 22 13:58:08 crc kubenswrapper[4769]: E0122 13:58:08.453244 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7585045d-5962-4b7d-903e-97f301a8fd47" containerName="extract" Jan 22 13:58:08 crc kubenswrapper[4769]: I0122 13:58:08.453259 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="7585045d-5962-4b7d-903e-97f301a8fd47" containerName="extract" Jan 22 13:58:08 crc kubenswrapper[4769]: E0122 13:58:08.453274 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7585045d-5962-4b7d-903e-97f301a8fd47" containerName="pull" Jan 22 13:58:08 crc kubenswrapper[4769]: I0122 13:58:08.453280 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="7585045d-5962-4b7d-903e-97f301a8fd47" containerName="pull" Jan 22 13:58:08 crc kubenswrapper[4769]: E0122 13:58:08.453293 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7585045d-5962-4b7d-903e-97f301a8fd47" containerName="util" Jan 22 
13:58:08 crc kubenswrapper[4769]: I0122 13:58:08.453300 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="7585045d-5962-4b7d-903e-97f301a8fd47" containerName="util" Jan 22 13:58:08 crc kubenswrapper[4769]: I0122 13:58:08.453407 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="7585045d-5962-4b7d-903e-97f301a8fd47" containerName="extract" Jan 22 13:58:08 crc kubenswrapper[4769]: I0122 13:58:08.453835 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-f94887bb5-8mc8h" Jan 22 13:58:08 crc kubenswrapper[4769]: I0122 13:58:08.457518 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-qkrbx" Jan 22 13:58:08 crc kubenswrapper[4769]: I0122 13:58:08.488596 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-f94887bb5-8mc8h"] Jan 22 13:58:08 crc kubenswrapper[4769]: I0122 13:58:08.548833 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbxk4\" (UniqueName: \"kubernetes.io/projected/a48b50b3-ad51-4268-a926-bf2f1d7fd3f6-kube-api-access-rbxk4\") pod \"openstack-operator-controller-init-f94887bb5-8mc8h\" (UID: \"a48b50b3-ad51-4268-a926-bf2f1d7fd3f6\") " pod="openstack-operators/openstack-operator-controller-init-f94887bb5-8mc8h" Jan 22 13:58:08 crc kubenswrapper[4769]: I0122 13:58:08.591356 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vf99m"] Jan 22 13:58:08 crc kubenswrapper[4769]: I0122 13:58:08.650406 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbxk4\" (UniqueName: \"kubernetes.io/projected/a48b50b3-ad51-4268-a926-bf2f1d7fd3f6-kube-api-access-rbxk4\") pod \"openstack-operator-controller-init-f94887bb5-8mc8h\" (UID: \"a48b50b3-ad51-4268-a926-bf2f1d7fd3f6\") " pod="openstack-operators/openstack-operator-controller-init-f94887bb5-8mc8h" Jan 22 13:58:08 crc kubenswrapper[4769]: I0122 13:58:08.668716 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbxk4\" (UniqueName: \"kubernetes.io/projected/a48b50b3-ad51-4268-a926-bf2f1d7fd3f6-kube-api-access-rbxk4\") pod \"openstack-operator-controller-init-f94887bb5-8mc8h\" (UID: \"a48b50b3-ad51-4268-a926-bf2f1d7fd3f6\") " pod="openstack-operators/openstack-operator-controller-init-f94887bb5-8mc8h" Jan 22 13:58:08 crc kubenswrapper[4769]: I0122 13:58:08.775190 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-f94887bb5-8mc8h" Jan 22 13:58:09 crc kubenswrapper[4769]: I0122 13:58:09.223474 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vf99m" podUID="19e34c89-b2d2-4bd3-a9b1-eff968aefea7" containerName="registry-server" containerID="cri-o://32d2c57e67c586f7ef14b5cdd5a595883b1783d97fd812f954865eaeeda5831e" gracePeriod=2 Jan 22 13:58:09 crc kubenswrapper[4769]: I0122 13:58:09.230282 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-f94887bb5-8mc8h"] Jan 22 13:58:10 crc kubenswrapper[4769]: I0122 13:58:10.108703 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vf99m" Jan 22 13:58:10 crc kubenswrapper[4769]: I0122 13:58:10.172536 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19e34c89-b2d2-4bd3-a9b1-eff968aefea7-utilities\") pod \"19e34c89-b2d2-4bd3-a9b1-eff968aefea7\" (UID: \"19e34c89-b2d2-4bd3-a9b1-eff968aefea7\") " Jan 22 13:58:10 crc kubenswrapper[4769]: I0122 13:58:10.172598 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r76nc\" (UniqueName: \"kubernetes.io/projected/19e34c89-b2d2-4bd3-a9b1-eff968aefea7-kube-api-access-r76nc\") pod \"19e34c89-b2d2-4bd3-a9b1-eff968aefea7\" (UID: \"19e34c89-b2d2-4bd3-a9b1-eff968aefea7\") " Jan 22 13:58:10 crc kubenswrapper[4769]: I0122 13:58:10.172628 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19e34c89-b2d2-4bd3-a9b1-eff968aefea7-catalog-content\") pod \"19e34c89-b2d2-4bd3-a9b1-eff968aefea7\" (UID: \"19e34c89-b2d2-4bd3-a9b1-eff968aefea7\") " Jan 22 13:58:10 crc kubenswrapper[4769]: I0122 13:58:10.173535 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19e34c89-b2d2-4bd3-a9b1-eff968aefea7-utilities" (OuterVolumeSpecName: "utilities") pod "19e34c89-b2d2-4bd3-a9b1-eff968aefea7" (UID: "19e34c89-b2d2-4bd3-a9b1-eff968aefea7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 13:58:10 crc kubenswrapper[4769]: I0122 13:58:10.177910 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19e34c89-b2d2-4bd3-a9b1-eff968aefea7-kube-api-access-r76nc" (OuterVolumeSpecName: "kube-api-access-r76nc") pod "19e34c89-b2d2-4bd3-a9b1-eff968aefea7" (UID: "19e34c89-b2d2-4bd3-a9b1-eff968aefea7"). InnerVolumeSpecName "kube-api-access-r76nc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:58:10 crc kubenswrapper[4769]: I0122 13:58:10.182598 4769 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19e34c89-b2d2-4bd3-a9b1-eff968aefea7-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 13:58:10 crc kubenswrapper[4769]: I0122 13:58:10.182636 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r76nc\" (UniqueName: \"kubernetes.io/projected/19e34c89-b2d2-4bd3-a9b1-eff968aefea7-kube-api-access-r76nc\") on node \"crc\" DevicePath \"\"" Jan 22 13:58:10 crc kubenswrapper[4769]: I0122 13:58:10.196661 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19e34c89-b2d2-4bd3-a9b1-eff968aefea7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "19e34c89-b2d2-4bd3-a9b1-eff968aefea7" (UID: "19e34c89-b2d2-4bd3-a9b1-eff968aefea7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 13:58:10 crc kubenswrapper[4769]: I0122 13:58:10.230473 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-f94887bb5-8mc8h" event={"ID":"a48b50b3-ad51-4268-a926-bf2f1d7fd3f6","Type":"ContainerStarted","Data":"c21357eb21f14705f81f6e0a52164ba4dfaea6d84839a44ef65b7b41522cbb28"} Jan 22 13:58:10 crc kubenswrapper[4769]: I0122 13:58:10.233766 4769 generic.go:334] "Generic (PLEG): container finished" podID="19e34c89-b2d2-4bd3-a9b1-eff968aefea7" containerID="32d2c57e67c586f7ef14b5cdd5a595883b1783d97fd812f954865eaeeda5831e" exitCode=0 Jan 22 13:58:10 crc kubenswrapper[4769]: I0122 13:58:10.233814 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vf99m" event={"ID":"19e34c89-b2d2-4bd3-a9b1-eff968aefea7","Type":"ContainerDied","Data":"32d2c57e67c586f7ef14b5cdd5a595883b1783d97fd812f954865eaeeda5831e"} Jan 22 13:58:10 crc kubenswrapper[4769]: I0122 13:58:10.233840 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vf99m" event={"ID":"19e34c89-b2d2-4bd3-a9b1-eff968aefea7","Type":"ContainerDied","Data":"2e64b6e8a520f3843ec7c9ae5982056e8fcd13959c76144eb1858b806a0dcc3b"} Jan 22 13:58:10 crc kubenswrapper[4769]: I0122 13:58:10.233857 4769 scope.go:117] "RemoveContainer" containerID="32d2c57e67c586f7ef14b5cdd5a595883b1783d97fd812f954865eaeeda5831e" Jan 22 13:58:10 crc kubenswrapper[4769]: I0122 13:58:10.233880 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vf99m" Jan 22 13:58:10 crc kubenswrapper[4769]: I0122 13:58:10.251232 4769 scope.go:117] "RemoveContainer" containerID="2b9202e4245ed91c20beeb633e4f1181139c96e55d88de4beac3b7578eb742b2" Jan 22 13:58:10 crc kubenswrapper[4769]: I0122 13:58:10.262688 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vf99m"] Jan 22 13:58:10 crc kubenswrapper[4769]: I0122 13:58:10.269876 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vf99m"] Jan 22 13:58:10 crc kubenswrapper[4769]: I0122 13:58:10.283630 4769 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19e34c89-b2d2-4bd3-a9b1-eff968aefea7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 13:58:10 crc kubenswrapper[4769]: I0122 13:58:10.287398 4769 scope.go:117] "RemoveContainer" containerID="13b66d99502694caaa890328fbe448d01ad157fe9766454de4bcbc559f093be6" Jan 22 13:58:10 crc kubenswrapper[4769]: I0122 13:58:10.317170 4769 scope.go:117] "RemoveContainer" containerID="32d2c57e67c586f7ef14b5cdd5a595883b1783d97fd812f954865eaeeda5831e" Jan 22 13:58:10 crc kubenswrapper[4769]: E0122 13:58:10.319492 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32d2c57e67c586f7ef14b5cdd5a595883b1783d97fd812f954865eaeeda5831e\": container with ID starting with 32d2c57e67c586f7ef14b5cdd5a595883b1783d97fd812f954865eaeeda5831e not found: ID does not exist" containerID="32d2c57e67c586f7ef14b5cdd5a595883b1783d97fd812f954865eaeeda5831e" Jan 22 13:58:10 crc kubenswrapper[4769]: I0122 13:58:10.319546 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32d2c57e67c586f7ef14b5cdd5a595883b1783d97fd812f954865eaeeda5831e"} err="failed to get container 
status \"32d2c57e67c586f7ef14b5cdd5a595883b1783d97fd812f954865eaeeda5831e\": rpc error: code = NotFound desc = could not find container \"32d2c57e67c586f7ef14b5cdd5a595883b1783d97fd812f954865eaeeda5831e\": container with ID starting with 32d2c57e67c586f7ef14b5cdd5a595883b1783d97fd812f954865eaeeda5831e not found: ID does not exist" Jan 22 13:58:10 crc kubenswrapper[4769]: I0122 13:58:10.319576 4769 scope.go:117] "RemoveContainer" containerID="2b9202e4245ed91c20beeb633e4f1181139c96e55d88de4beac3b7578eb742b2" Jan 22 13:58:10 crc kubenswrapper[4769]: E0122 13:58:10.320015 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b9202e4245ed91c20beeb633e4f1181139c96e55d88de4beac3b7578eb742b2\": container with ID starting with 2b9202e4245ed91c20beeb633e4f1181139c96e55d88de4beac3b7578eb742b2 not found: ID does not exist" containerID="2b9202e4245ed91c20beeb633e4f1181139c96e55d88de4beac3b7578eb742b2" Jan 22 13:58:10 crc kubenswrapper[4769]: I0122 13:58:10.320086 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b9202e4245ed91c20beeb633e4f1181139c96e55d88de4beac3b7578eb742b2"} err="failed to get container status \"2b9202e4245ed91c20beeb633e4f1181139c96e55d88de4beac3b7578eb742b2\": rpc error: code = NotFound desc = could not find container \"2b9202e4245ed91c20beeb633e4f1181139c96e55d88de4beac3b7578eb742b2\": container with ID starting with 2b9202e4245ed91c20beeb633e4f1181139c96e55d88de4beac3b7578eb742b2 not found: ID does not exist" Jan 22 13:58:10 crc kubenswrapper[4769]: I0122 13:58:10.320129 4769 scope.go:117] "RemoveContainer" containerID="13b66d99502694caaa890328fbe448d01ad157fe9766454de4bcbc559f093be6" Jan 22 13:58:10 crc kubenswrapper[4769]: E0122 13:58:10.320503 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13b66d99502694caaa890328fbe448d01ad157fe9766454de4bcbc559f093be6\": container with ID starting with 13b66d99502694caaa890328fbe448d01ad157fe9766454de4bcbc559f093be6 not found: ID does not exist" containerID="13b66d99502694caaa890328fbe448d01ad157fe9766454de4bcbc559f093be6" Jan 22 13:58:10 crc kubenswrapper[4769]: I0122 13:58:10.320550 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13b66d99502694caaa890328fbe448d01ad157fe9766454de4bcbc559f093be6"} err="failed to get container status \"13b66d99502694caaa890328fbe448d01ad157fe9766454de4bcbc559f093be6\": rpc error: code = NotFound desc = could not find container \"13b66d99502694caaa890328fbe448d01ad157fe9766454de4bcbc559f093be6\": container with ID starting with 13b66d99502694caaa890328fbe448d01ad157fe9766454de4bcbc559f093be6 not found: ID does not exist" Jan 22 13:58:10 crc kubenswrapper[4769]: I0122 13:58:10.894916 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19e34c89-b2d2-4bd3-a9b1-eff968aefea7" path="/var/lib/kubelet/pods/19e34c89-b2d2-4bd3-a9b1-eff968aefea7/volumes" Jan 22 13:58:14 crc kubenswrapper[4769]: I0122 13:58:14.273565 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-f94887bb5-8mc8h" event={"ID":"a48b50b3-ad51-4268-a926-bf2f1d7fd3f6","Type":"ContainerStarted","Data":"41d130a51a375bacfd08438e3b3dda9d87e38aa7e29fbe6a9290bbec5e09c848"} Jan 22 13:58:14 crc kubenswrapper[4769]: I0122 13:58:14.274230 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/openstack-operator-controller-init-f94887bb5-8mc8h" Jan 22 13:58:14 crc kubenswrapper[4769]: I0122 13:58:14.310035 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-f94887bb5-8mc8h" podStartSLOduration=2.314095205 podStartE2EDuration="6.310008308s" podCreationTimestamp="2026-01-22 13:58:08 +0000 UTC" firstStartedPulling="2026-01-22 13:58:09.242217444 +0000 UTC m=+868.653327373" lastFinishedPulling="2026-01-22 13:58:13.238130547 +0000 UTC m=+872.649240476" observedRunningTime="2026-01-22 13:58:14.302002539 +0000 UTC m=+873.713112508" watchObservedRunningTime="2026-01-22 13:58:14.310008308 +0000 UTC m=+873.721118257" Jan 22 13:58:18 crc kubenswrapper[4769]: I0122 13:58:18.778195 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-f94887bb5-8mc8h" Jan 22 13:58:18 crc kubenswrapper[4769]: I0122 13:58:18.868050 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hgq6q"] Jan 22 13:58:18 crc kubenswrapper[4769]: E0122 13:58:18.868262 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19e34c89-b2d2-4bd3-a9b1-eff968aefea7" containerName="registry-server" Jan 22 13:58:18 crc kubenswrapper[4769]: I0122 13:58:18.868272 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="19e34c89-b2d2-4bd3-a9b1-eff968aefea7" containerName="registry-server" Jan 22 13:58:18 crc kubenswrapper[4769]: E0122 13:58:18.868287 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19e34c89-b2d2-4bd3-a9b1-eff968aefea7" containerName="extract-utilities" Jan 22 13:58:18 crc kubenswrapper[4769]: I0122 13:58:18.868293 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="19e34c89-b2d2-4bd3-a9b1-eff968aefea7" containerName="extract-utilities" Jan 22 13:58:18 crc kubenswrapper[4769]: E0122 13:58:18.868303 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19e34c89-b2d2-4bd3-a9b1-eff968aefea7" containerName="extract-content" Jan 22 13:58:18 crc kubenswrapper[4769]: I0122 13:58:18.868310 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="19e34c89-b2d2-4bd3-a9b1-eff968aefea7" containerName="extract-content" Jan 22 13:58:18 crc kubenswrapper[4769]: I0122 13:58:18.868415 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="19e34c89-b2d2-4bd3-a9b1-eff968aefea7" containerName="registry-server" Jan 22 13:58:18 crc kubenswrapper[4769]: I0122 13:58:18.869431 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hgq6q" Jan 22 13:58:18 crc kubenswrapper[4769]: I0122 13:58:18.897506 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hgq6q"] Jan 22 13:58:19 crc kubenswrapper[4769]: I0122 13:58:19.011443 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spsxj\" (UniqueName: \"kubernetes.io/projected/c9017724-ecca-4b60-89eb-c21ac37ad9fd-kube-api-access-spsxj\") pod \"certified-operators-hgq6q\" (UID: \"c9017724-ecca-4b60-89eb-c21ac37ad9fd\") " pod="openshift-marketplace/certified-operators-hgq6q" Jan 22 13:58:19 crc kubenswrapper[4769]: I0122 13:58:19.011498 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9017724-ecca-4b60-89eb-c21ac37ad9fd-catalog-content\") pod \"certified-operators-hgq6q\" (UID: \"c9017724-ecca-4b60-89eb-c21ac37ad9fd\") " pod="openshift-marketplace/certified-operators-hgq6q" Jan 22 13:58:19 crc kubenswrapper[4769]: I0122 13:58:19.011727 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9017724-ecca-4b60-89eb-c21ac37ad9fd-utilities\") pod \"certified-operators-hgq6q\" (UID: \"c9017724-ecca-4b60-89eb-c21ac37ad9fd\") " pod="openshift-marketplace/certified-operators-hgq6q" Jan 22 13:58:19 crc kubenswrapper[4769]: I0122 13:58:19.112926 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9017724-ecca-4b60-89eb-c21ac37ad9fd-utilities\") pod \"certified-operators-hgq6q\" (UID: \"c9017724-ecca-4b60-89eb-c21ac37ad9fd\") " pod="openshift-marketplace/certified-operators-hgq6q" Jan 22 13:58:19 crc kubenswrapper[4769]: I0122 13:58:19.113007 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spsxj\" (UniqueName: \"kubernetes.io/projected/c9017724-ecca-4b60-89eb-c21ac37ad9fd-kube-api-access-spsxj\") pod \"certified-operators-hgq6q\" (UID: \"c9017724-ecca-4b60-89eb-c21ac37ad9fd\") " pod="openshift-marketplace/certified-operators-hgq6q" Jan 22 13:58:19 crc kubenswrapper[4769]: I0122 13:58:19.113026 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9017724-ecca-4b60-89eb-c21ac37ad9fd-catalog-content\") pod \"certified-operators-hgq6q\" (UID: \"c9017724-ecca-4b60-89eb-c21ac37ad9fd\") " pod="openshift-marketplace/certified-operators-hgq6q" Jan 22 13:58:19 crc kubenswrapper[4769]: I0122 13:58:19.113633 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9017724-ecca-4b60-89eb-c21ac37ad9fd-utilities\") pod \"certified-operators-hgq6q\" (UID: \"c9017724-ecca-4b60-89eb-c21ac37ad9fd\") " pod="openshift-marketplace/certified-operators-hgq6q" Jan 22 13:58:19 crc kubenswrapper[4769]: I0122 13:58:19.113662 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9017724-ecca-4b60-89eb-c21ac37ad9fd-catalog-content\") pod \"certified-operators-hgq6q\" (UID: \"c9017724-ecca-4b60-89eb-c21ac37ad9fd\") " pod="openshift-marketplace/certified-operators-hgq6q" Jan 22 13:58:19 crc kubenswrapper[4769]: I0122 13:58:19.137877 4769 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-spsxj\" (UniqueName: \"kubernetes.io/projected/c9017724-ecca-4b60-89eb-c21ac37ad9fd-kube-api-access-spsxj\") pod \"certified-operators-hgq6q\" (UID: \"c9017724-ecca-4b60-89eb-c21ac37ad9fd\") " pod="openshift-marketplace/certified-operators-hgq6q" Jan 22 13:58:19 crc kubenswrapper[4769]: I0122 13:58:19.187907 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hgq6q" Jan 22 13:58:19 crc kubenswrapper[4769]: I0122 13:58:19.522067 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hgq6q"] Jan 22 13:58:20 crc kubenswrapper[4769]: I0122 13:58:20.321719 4769 generic.go:334] "Generic (PLEG): container finished" podID="c9017724-ecca-4b60-89eb-c21ac37ad9fd" containerID="f64c48d9a4bfbecab5fb131323005a1c9b76790aa7fb985297132eec5177d55d" exitCode=0 Jan 22 13:58:20 crc kubenswrapper[4769]: I0122 13:58:20.321759 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hgq6q" event={"ID":"c9017724-ecca-4b60-89eb-c21ac37ad9fd","Type":"ContainerDied","Data":"f64c48d9a4bfbecab5fb131323005a1c9b76790aa7fb985297132eec5177d55d"} Jan 22 13:58:20 crc kubenswrapper[4769]: I0122 13:58:20.321838 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hgq6q" event={"ID":"c9017724-ecca-4b60-89eb-c21ac37ad9fd","Type":"ContainerStarted","Data":"1ac9d31a1466ddc11a2d3ca5584af4c7f38778847f983d6cd1e3693f55b65e45"} Jan 22 13:58:21 crc kubenswrapper[4769]: I0122 13:58:21.328299 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hgq6q" event={"ID":"c9017724-ecca-4b60-89eb-c21ac37ad9fd","Type":"ContainerStarted","Data":"47b3317b91b200d1fe0da34fe44cbd7828b32d291e0274869306e6f7a9f67836"} Jan 22 13:58:22 crc kubenswrapper[4769]: I0122 13:58:22.334274 4769 generic.go:334] "Generic (PLEG): container finished" podID="c9017724-ecca-4b60-89eb-c21ac37ad9fd" containerID="47b3317b91b200d1fe0da34fe44cbd7828b32d291e0274869306e6f7a9f67836" exitCode=0 Jan 22 13:58:22 crc kubenswrapper[4769]: I0122 13:58:22.334318 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hgq6q" event={"ID":"c9017724-ecca-4b60-89eb-c21ac37ad9fd","Type":"ContainerDied","Data":"47b3317b91b200d1fe0da34fe44cbd7828b32d291e0274869306e6f7a9f67836"} Jan 22 13:58:23 crc kubenswrapper[4769]: I0122 13:58:23.359352 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hgq6q" event={"ID":"c9017724-ecca-4b60-89eb-c21ac37ad9fd","Type":"ContainerStarted","Data":"3363a488503f9f15d115a1ab498ea56bc79e8c52c65cfe4a96c7a2d96e9fff80"} Jan 22 13:58:23 crc kubenswrapper[4769]: I0122 13:58:23.382304 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hgq6q" podStartSLOduration=2.933216006 podStartE2EDuration="5.382278008s" podCreationTimestamp="2026-01-22 13:58:18 +0000 UTC" firstStartedPulling="2026-01-22 13:58:20.323372435 +0000 UTC m=+879.734482354" lastFinishedPulling="2026-01-22 13:58:22.772434437 +0000 UTC m=+882.183544356" observedRunningTime="2026-01-22 13:58:23.381971499 +0000 UTC m=+882.793081518" watchObservedRunningTime="2026-01-22 13:58:23.382278008 +0000 UTC m=+882.793387937" Jan 22 13:58:29 crc kubenswrapper[4769]: I0122 13:58:29.188596 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-hgq6q" Jan 22 13:58:29 crc kubenswrapper[4769]: I0122 13:58:29.189239 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hgq6q" Jan 22 13:58:29 crc kubenswrapper[4769]: I0122 13:58:29.234891 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hgq6q" Jan 22 13:58:29 crc kubenswrapper[4769]: I0122 13:58:29.440594 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hgq6q" Jan 22 13:58:29 crc kubenswrapper[4769]: I0122 13:58:29.483155 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hgq6q"] Jan 22 13:58:31 crc kubenswrapper[4769]: I0122 13:58:31.404232 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hgq6q" podUID="c9017724-ecca-4b60-89eb-c21ac37ad9fd" containerName="registry-server" containerID="cri-o://3363a488503f9f15d115a1ab498ea56bc79e8c52c65cfe4a96c7a2d96e9fff80" gracePeriod=2 Jan 22 13:58:31 crc kubenswrapper[4769]: I0122 13:58:31.890581 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hslhq"] Jan 22 13:58:31 crc kubenswrapper[4769]: I0122 13:58:31.892168 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hslhq" Jan 22 13:58:31 crc kubenswrapper[4769]: I0122 13:58:31.916511 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hslhq"] Jan 22 13:58:31 crc kubenswrapper[4769]: I0122 13:58:31.992274 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bf4cf7c-e696-4123-af54-e8f96242dea3-catalog-content\") pod \"community-operators-hslhq\" (UID: \"8bf4cf7c-e696-4123-af54-e8f96242dea3\") " pod="openshift-marketplace/community-operators-hslhq" Jan 22 13:58:31 crc kubenswrapper[4769]: I0122 13:58:31.992341 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4nxf\" (UniqueName: \"kubernetes.io/projected/8bf4cf7c-e696-4123-af54-e8f96242dea3-kube-api-access-d4nxf\") pod \"community-operators-hslhq\" (UID: \"8bf4cf7c-e696-4123-af54-e8f96242dea3\") " pod="openshift-marketplace/community-operators-hslhq" Jan 22 13:58:31 crc kubenswrapper[4769]: I0122 13:58:31.992392 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bf4cf7c-e696-4123-af54-e8f96242dea3-utilities\") pod \"community-operators-hslhq\" (UID: \"8bf4cf7c-e696-4123-af54-e8f96242dea3\") " pod="openshift-marketplace/community-operators-hslhq" Jan 22 13:58:32 crc kubenswrapper[4769]: I0122 13:58:32.094059 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bf4cf7c-e696-4123-af54-e8f96242dea3-utilities\") pod \"community-operators-hslhq\" (UID: \"8bf4cf7c-e696-4123-af54-e8f96242dea3\") " pod="openshift-marketplace/community-operators-hslhq" Jan 22 13:58:32 crc kubenswrapper[4769]: I0122 13:58:32.094146 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/8bf4cf7c-e696-4123-af54-e8f96242dea3-catalog-content\") pod \"community-operators-hslhq\" (UID: \"8bf4cf7c-e696-4123-af54-e8f96242dea3\") " pod="openshift-marketplace/community-operators-hslhq" Jan 22 13:58:32 crc kubenswrapper[4769]: I0122 13:58:32.094179 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4nxf\" (UniqueName: \"kubernetes.io/projected/8bf4cf7c-e696-4123-af54-e8f96242dea3-kube-api-access-d4nxf\") pod \"community-operators-hslhq\" (UID: \"8bf4cf7c-e696-4123-af54-e8f96242dea3\") " pod="openshift-marketplace/community-operators-hslhq" Jan 22 13:58:32 crc kubenswrapper[4769]: I0122 13:58:32.094644 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bf4cf7c-e696-4123-af54-e8f96242dea3-utilities\") pod \"community-operators-hslhq\" (UID: \"8bf4cf7c-e696-4123-af54-e8f96242dea3\") " pod="openshift-marketplace/community-operators-hslhq" Jan 22 13:58:32 crc kubenswrapper[4769]: I0122 13:58:32.094702 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bf4cf7c-e696-4123-af54-e8f96242dea3-catalog-content\") pod \"community-operators-hslhq\" (UID: \"8bf4cf7c-e696-4123-af54-e8f96242dea3\") " pod="openshift-marketplace/community-operators-hslhq" Jan 22 13:58:32 crc kubenswrapper[4769]: I0122 13:58:32.121303 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4nxf\" (UniqueName: \"kubernetes.io/projected/8bf4cf7c-e696-4123-af54-e8f96242dea3-kube-api-access-d4nxf\") pod \"community-operators-hslhq\" (UID: \"8bf4cf7c-e696-4123-af54-e8f96242dea3\") " pod="openshift-marketplace/community-operators-hslhq" Jan 22 13:58:32 crc kubenswrapper[4769]: I0122 13:58:32.208776 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hslhq" Jan 22 13:58:32 crc kubenswrapper[4769]: I0122 13:58:32.514952 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hslhq"] Jan 22 13:58:33 crc kubenswrapper[4769]: W0122 13:58:33.071916 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8bf4cf7c_e696_4123_af54_e8f96242dea3.slice/crio-0c1552ad818b2e1be914c6f1cf75464188673db6cbd965f9b19cec1319993de7 WatchSource:0}: Error finding container 0c1552ad818b2e1be914c6f1cf75464188673db6cbd965f9b19cec1319993de7: Status 404 returned error can't find the container with id 0c1552ad818b2e1be914c6f1cf75464188673db6cbd965f9b19cec1319993de7 Jan 22 13:58:33 crc kubenswrapper[4769]: I0122 13:58:33.419339 4769 generic.go:334] "Generic (PLEG): container finished" podID="8bf4cf7c-e696-4123-af54-e8f96242dea3" containerID="7c1458b4e0b7ea6519275d802b12eea4d4603db4985bd4c7ba57075375cf25a8" exitCode=0 Jan 22 13:58:33 crc kubenswrapper[4769]: I0122 13:58:33.419476 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hslhq" event={"ID":"8bf4cf7c-e696-4123-af54-e8f96242dea3","Type":"ContainerDied","Data":"7c1458b4e0b7ea6519275d802b12eea4d4603db4985bd4c7ba57075375cf25a8"} Jan 22 13:58:33 crc kubenswrapper[4769]: I0122 13:58:33.419732 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hslhq" event={"ID":"8bf4cf7c-e696-4123-af54-e8f96242dea3","Type":"ContainerStarted","Data":"0c1552ad818b2e1be914c6f1cf75464188673db6cbd965f9b19cec1319993de7"} Jan 22 13:58:33 crc kubenswrapper[4769]: I0122 13:58:33.424479 4769 generic.go:334] "Generic (PLEG): container finished" podID="c9017724-ecca-4b60-89eb-c21ac37ad9fd" containerID="3363a488503f9f15d115a1ab498ea56bc79e8c52c65cfe4a96c7a2d96e9fff80" exitCode=0 Jan 22 13:58:33 crc kubenswrapper[4769]: I0122 13:58:33.424530 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hgq6q" event={"ID":"c9017724-ecca-4b60-89eb-c21ac37ad9fd","Type":"ContainerDied","Data":"3363a488503f9f15d115a1ab498ea56bc79e8c52c65cfe4a96c7a2d96e9fff80"} Jan 22 13:58:33 crc kubenswrapper[4769]: I0122 13:58:33.609018 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hgq6q" Jan 22 13:58:33 crc kubenswrapper[4769]: I0122 13:58:33.655265 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9017724-ecca-4b60-89eb-c21ac37ad9fd-utilities\") pod \"c9017724-ecca-4b60-89eb-c21ac37ad9fd\" (UID: \"c9017724-ecca-4b60-89eb-c21ac37ad9fd\") " Jan 22 13:58:33 crc kubenswrapper[4769]: I0122 13:58:33.655344 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spsxj\" (UniqueName: \"kubernetes.io/projected/c9017724-ecca-4b60-89eb-c21ac37ad9fd-kube-api-access-spsxj\") pod \"c9017724-ecca-4b60-89eb-c21ac37ad9fd\" (UID: \"c9017724-ecca-4b60-89eb-c21ac37ad9fd\") " Jan 22 13:58:33 crc kubenswrapper[4769]: I0122 13:58:33.655371 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9017724-ecca-4b60-89eb-c21ac37ad9fd-catalog-content\") pod \"c9017724-ecca-4b60-89eb-c21ac37ad9fd\" (UID: \"c9017724-ecca-4b60-89eb-c21ac37ad9fd\") " Jan 22 13:58:33 crc kubenswrapper[4769]: I0122 13:58:33.656907 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9017724-ecca-4b60-89eb-c21ac37ad9fd-utilities" (OuterVolumeSpecName: "utilities") pod "c9017724-ecca-4b60-89eb-c21ac37ad9fd" (UID: "c9017724-ecca-4b60-89eb-c21ac37ad9fd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 13:58:33 crc kubenswrapper[4769]: I0122 13:58:33.657063 4769 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9017724-ecca-4b60-89eb-c21ac37ad9fd-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 13:58:33 crc kubenswrapper[4769]: I0122 13:58:33.661049 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9017724-ecca-4b60-89eb-c21ac37ad9fd-kube-api-access-spsxj" (OuterVolumeSpecName: "kube-api-access-spsxj") pod "c9017724-ecca-4b60-89eb-c21ac37ad9fd" (UID: "c9017724-ecca-4b60-89eb-c21ac37ad9fd"). InnerVolumeSpecName "kube-api-access-spsxj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:58:33 crc kubenswrapper[4769]: I0122 13:58:33.700503 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9017724-ecca-4b60-89eb-c21ac37ad9fd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c9017724-ecca-4b60-89eb-c21ac37ad9fd" (UID: "c9017724-ecca-4b60-89eb-c21ac37ad9fd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 13:58:33 crc kubenswrapper[4769]: I0122 13:58:33.758363 4769 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9017724-ecca-4b60-89eb-c21ac37ad9fd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 13:58:33 crc kubenswrapper[4769]: I0122 13:58:33.758394 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-spsxj\" (UniqueName: \"kubernetes.io/projected/c9017724-ecca-4b60-89eb-c21ac37ad9fd-kube-api-access-spsxj\") on node \"crc\" DevicePath \"\"" Jan 22 13:58:34 crc kubenswrapper[4769]: I0122 13:58:34.431386 4769 generic.go:334] "Generic (PLEG): container finished" podID="8bf4cf7c-e696-4123-af54-e8f96242dea3" containerID="ecd6b7d791c1fc22812115bf124726f845b9a1695d08053991cc5bf7429a01b6" exitCode=0 Jan 22 13:58:34 crc kubenswrapper[4769]: I0122 13:58:34.431536 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hslhq" event={"ID":"8bf4cf7c-e696-4123-af54-e8f96242dea3","Type":"ContainerDied","Data":"ecd6b7d791c1fc22812115bf124726f845b9a1695d08053991cc5bf7429a01b6"} Jan 22 13:58:34 crc kubenswrapper[4769]: I0122 13:58:34.434680 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hgq6q" event={"ID":"c9017724-ecca-4b60-89eb-c21ac37ad9fd","Type":"ContainerDied","Data":"1ac9d31a1466ddc11a2d3ca5584af4c7f38778847f983d6cd1e3693f55b65e45"} Jan 22 13:58:34 crc kubenswrapper[4769]: I0122 13:58:34.434723 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hgq6q" Jan 22 13:58:34 crc kubenswrapper[4769]: I0122 13:58:34.434830 4769 scope.go:117] "RemoveContainer" containerID="3363a488503f9f15d115a1ab498ea56bc79e8c52c65cfe4a96c7a2d96e9fff80" Jan 22 13:58:34 crc kubenswrapper[4769]: I0122 13:58:34.467457 4769 scope.go:117] "RemoveContainer" containerID="47b3317b91b200d1fe0da34fe44cbd7828b32d291e0274869306e6f7a9f67836" Jan 22 13:58:34 crc kubenswrapper[4769]: I0122 13:58:34.483586 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hgq6q"] Jan 22 13:58:34 crc kubenswrapper[4769]: I0122 13:58:34.486934 4769 scope.go:117] "RemoveContainer" containerID="f64c48d9a4bfbecab5fb131323005a1c9b76790aa7fb985297132eec5177d55d" Jan 22 13:58:34 crc kubenswrapper[4769]: I0122 13:58:34.501805 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hgq6q"] Jan 22 13:58:34 crc kubenswrapper[4769]: I0122 13:58:34.890165 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9017724-ecca-4b60-89eb-c21ac37ad9fd" path="/var/lib/kubelet/pods/c9017724-ecca-4b60-89eb-c21ac37ad9fd/volumes" Jan 22 13:58:35 crc kubenswrapper[4769]: I0122 13:58:35.444286 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hslhq" event={"ID":"8bf4cf7c-e696-4123-af54-e8f96242dea3","Type":"ContainerStarted","Data":"cdbf7f7f6a90921d32f3d4d11232230d895172702918c94b29968a993593d333"} Jan 22 13:58:35 crc kubenswrapper[4769]: I0122 13:58:35.459346 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hslhq" podStartSLOduration=3.002467977 podStartE2EDuration="4.459331343s" podCreationTimestamp="2026-01-22 13:58:31 +0000 UTC" firstStartedPulling="2026-01-22 13:58:33.423961383 +0000 UTC m=+892.835071322" 
lastFinishedPulling="2026-01-22 13:58:34.880824759 +0000 UTC m=+894.291934688" observedRunningTime="2026-01-22 13:58:35.456993882 +0000 UTC m=+894.868103811" watchObservedRunningTime="2026-01-22 13:58:35.459331343 +0000 UTC m=+894.870441272" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.481510 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-54q5q"] Jan 22 13:58:38 crc kubenswrapper[4769]: E0122 13:58:38.482286 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9017724-ecca-4b60-89eb-c21ac37ad9fd" containerName="extract-content" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.482298 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9017724-ecca-4b60-89eb-c21ac37ad9fd" containerName="extract-content" Jan 22 13:58:38 crc kubenswrapper[4769]: E0122 13:58:38.482315 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9017724-ecca-4b60-89eb-c21ac37ad9fd" containerName="extract-utilities" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.482321 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9017724-ecca-4b60-89eb-c21ac37ad9fd" containerName="extract-utilities" Jan 22 13:58:38 crc kubenswrapper[4769]: E0122 13:58:38.482329 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9017724-ecca-4b60-89eb-c21ac37ad9fd" containerName="registry-server" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.482334 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9017724-ecca-4b60-89eb-c21ac37ad9fd" containerName="registry-server" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.482453 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9017724-ecca-4b60-89eb-c21ac37ad9fd" containerName="registry-server" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.482870 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-54q5q" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.485503 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-jcqt2" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.509191 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-54q5q"] Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.509237 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-69cf5d4557-2q2v2"] Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.509895 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-2q2v2" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.520209 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-rlcb9"] Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.520882 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-rlcb9" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.521941 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-nvqlt" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.523920 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgjzm\" (UniqueName: \"kubernetes.io/projected/c6b325d8-50c6-411a-bc7f-938b284f0efb-kube-api-access-vgjzm\") pod \"designate-operator-controller-manager-b45d7bf98-rlcb9\" (UID: \"c6b325d8-50c6-411a-bc7f-938b284f0efb\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-rlcb9" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.523977 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fl5dd\" (UniqueName: \"kubernetes.io/projected/bc0b4b03-ee7e-44ed-9c1f-f481ae1a3049-kube-api-access-fl5dd\") pod \"cinder-operator-controller-manager-69cf5d4557-2q2v2\" (UID: \"bc0b4b03-ee7e-44ed-9c1f-f481ae1a3049\") " pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-2q2v2" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.524011 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9ss9\" (UniqueName: \"kubernetes.io/projected/141f0476-23eb-4a43-a4ac-4d33c12bfb5b-kube-api-access-k9ss9\") pod \"barbican-operator-controller-manager-59dd8b7cbf-54q5q\" (UID: \"141f0476-23eb-4a43-a4ac-4d33c12bfb5b\") " pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-54q5q" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.524693 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-9tkrs" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.534681 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-wvxp8"] Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.535506 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-wvxp8" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.545902 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-2wkst" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.550422 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-rlcb9"] Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.560125 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-wvxp8"] Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.573540 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-brq9d"] Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.574556 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-brq9d" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.578284 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-cppgt" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.579825 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-8rxgq"] Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.580702 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-8rxgq" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.588251 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-7b6pf" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.601042 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-69cf5d4557-2q2v2"] Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.609276 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-brq9d"] Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.619497 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-8rxgq"] Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.626299 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plxd9\" (UniqueName: \"kubernetes.io/projected/d40b03ae-0991-4364-85f3-89cf5e8d5686-kube-api-access-plxd9\") pod \"heat-operator-controller-manager-594c8c9d5d-brq9d\" (UID: \"d40b03ae-0991-4364-85f3-89cf5e8d5686\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-brq9d" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.626350 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hs8nq\" (UniqueName: \"kubernetes.io/projected/7d908338-dcdc-4423-b719-02d30f3834ed-kube-api-access-hs8nq\") pod \"horizon-operator-controller-manager-77d5c5b54f-8rxgq\" (UID: \"7d908338-dcdc-4423-b719-02d30f3834ed\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-8rxgq" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.626387 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgjzm\" (UniqueName: \"kubernetes.io/projected/c6b325d8-50c6-411a-bc7f-938b284f0efb-kube-api-access-vgjzm\") pod \"designate-operator-controller-manager-b45d7bf98-rlcb9\" (UID: \"c6b325d8-50c6-411a-bc7f-938b284f0efb\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-rlcb9" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.626417 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fl5dd\" (UniqueName: \"kubernetes.io/projected/bc0b4b03-ee7e-44ed-9c1f-f481ae1a3049-kube-api-access-fl5dd\") pod \"cinder-operator-controller-manager-69cf5d4557-2q2v2\" (UID: \"bc0b4b03-ee7e-44ed-9c1f-f481ae1a3049\") " pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-2q2v2" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.626436 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-whx6b\" (UniqueName: \"kubernetes.io/projected/ae11ee9d-5ccf-490d-b457-294820d6a337-kube-api-access-whx6b\") pod \"glance-operator-controller-manager-78fdd796fd-wvxp8\" (UID: \"ae11ee9d-5ccf-490d-b457-294820d6a337\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-wvxp8" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.626457 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9ss9\" (UniqueName: \"kubernetes.io/projected/141f0476-23eb-4a43-a4ac-4d33c12bfb5b-kube-api-access-k9ss9\") pod \"barbican-operator-controller-manager-59dd8b7cbf-54q5q\" (UID: \"141f0476-23eb-4a43-a4ac-4d33c12bfb5b\") " pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-54q5q" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.631688 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-54ccf4f85d-zt4sd"] Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.632442 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-zt4sd" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.637203 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-54ccf4f85d-zt4sd"] Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.640513 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-c2drt" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.640687 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.655072 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-5njtw"] Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.663668 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9ss9\" (UniqueName: \"kubernetes.io/projected/141f0476-23eb-4a43-a4ac-4d33c12bfb5b-kube-api-access-k9ss9\") pod \"barbican-operator-controller-manager-59dd8b7cbf-54q5q\" (UID: \"141f0476-23eb-4a43-a4ac-4d33c12bfb5b\") " pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-54q5q" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.669929 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgjzm\" (UniqueName: \"kubernetes.io/projected/c6b325d8-50c6-411a-bc7f-938b284f0efb-kube-api-access-vgjzm\") pod \"designate-operator-controller-manager-b45d7bf98-rlcb9\" (UID: \"c6b325d8-50c6-411a-bc7f-938b284f0efb\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-rlcb9" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.681039 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-5njtw" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.690318 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-wpg5l" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.703479 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fl5dd\" (UniqueName: \"kubernetes.io/projected/bc0b4b03-ee7e-44ed-9c1f-f481ae1a3049-kube-api-access-fl5dd\") pod \"cinder-operator-controller-manager-69cf5d4557-2q2v2\" (UID: \"bc0b4b03-ee7e-44ed-9c1f-f481ae1a3049\") " pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-2q2v2" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.713879 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-5njtw"] Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.738134 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plxd9\" (UniqueName: \"kubernetes.io/projected/d40b03ae-0991-4364-85f3-89cf5e8d5686-kube-api-access-plxd9\") pod \"heat-operator-controller-manager-594c8c9d5d-brq9d\" (UID: \"d40b03ae-0991-4364-85f3-89cf5e8d5686\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-brq9d" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.738206 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hs8nq\" (UniqueName: \"kubernetes.io/projected/7d908338-dcdc-4423-b719-02d30f3834ed-kube-api-access-hs8nq\") pod \"horizon-operator-controller-manager-77d5c5b54f-8rxgq\" (UID: \"7d908338-dcdc-4423-b719-02d30f3834ed\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-8rxgq" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.738266 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whx6b\" (UniqueName: \"kubernetes.io/projected/ae11ee9d-5ccf-490d-b457-294820d6a337-kube-api-access-whx6b\") pod \"glance-operator-controller-manager-78fdd796fd-wvxp8\" (UID: \"ae11ee9d-5ccf-490d-b457-294820d6a337\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-wvxp8" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.757813 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-f2klg"] Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.758586 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-f2klg" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.759850 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plxd9\" (UniqueName: \"kubernetes.io/projected/d40b03ae-0991-4364-85f3-89cf5e8d5686-kube-api-access-plxd9\") pod \"heat-operator-controller-manager-594c8c9d5d-brq9d\" (UID: \"d40b03ae-0991-4364-85f3-89cf5e8d5686\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-brq9d" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.762339 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-xcl4h" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.772592 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hs8nq\" (UniqueName: \"kubernetes.io/projected/7d908338-dcdc-4423-b719-02d30f3834ed-kube-api-access-hs8nq\") pod \"horizon-operator-controller-manager-77d5c5b54f-8rxgq\" (UID: \"7d908338-dcdc-4423-b719-02d30f3834ed\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-8rxgq" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.792854 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whx6b\" (UniqueName: \"kubernetes.io/projected/ae11ee9d-5ccf-490d-b457-294820d6a337-kube-api-access-whx6b\") pod \"glance-operator-controller-manager-78fdd796fd-wvxp8\" (UID: \"ae11ee9d-5ccf-490d-b457-294820d6a337\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-wvxp8" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.796467 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-w77v6"] Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.797223 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-w77v6" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.799610 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-zr2bd" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.806438 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-ttb7f"] Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.807264 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-ttb7f" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.808958 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-nm9km" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.810949 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5d8f59fb49-x8dvt"] Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.812132 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-x8dvt" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.814868 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-6b8bc8d87d-mwhh9"] Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.815507 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-mwhh9" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.817005 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-smdsm" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.817272 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-z9ctc" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.819362 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-f2klg"] Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.823512 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-ttb7f"] Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.828949 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5d8f59fb49-x8dvt"] Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.839731 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/13c33fdb-b388-4fdf-996c-544286f47a73-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-zt4sd\" (UID: \"13c33fdb-b388-4fdf-996c-544286f47a73\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-zt4sd" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.839802 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqfwj\" (UniqueName: \"kubernetes.io/projected/13c33fdb-b388-4fdf-996c-544286f47a73-kube-api-access-sqfwj\") pod \"infra-operator-controller-manager-54ccf4f85d-zt4sd\" (UID: \"13c33fdb-b388-4fdf-996c-544286f47a73\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-zt4sd" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.839856 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-782cz\" (UniqueName: \"kubernetes.io/projected/c367fcfb-38d9-4834-970d-7004d16c8249-kube-api-access-782cz\") pod \"ironic-operator-controller-manager-69d6c9f5b8-5njtw\" (UID: \"c367fcfb-38d9-4834-970d-7004d16c8249\") " pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-5njtw" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.840491 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-6b8bc8d87d-mwhh9"] Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.846528 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-w77v6"] Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.853485 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-54q5q" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.860090 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bd9774b6-fzz6p"] Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.861081 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-fzz6p" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.866190 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bd9774b6-fzz6p"] Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.866551 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-c6mn2" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.869934 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-2q2v2" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.874018 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8542tcht"] Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.880058 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8542tcht" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.881720 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.881991 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-ctf5z"] Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.882908 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-sn876" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.884463 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-ctf5z" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.890175 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-p88l8" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.890405 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-rlcb9" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.897254 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-ctf5z"] Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.897316 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5d646b7d76-prfwv"] Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.898015 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-prfwv" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.899721 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-glwh9" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.904068 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-wvxp8" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.904191 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8542tcht"] Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.921629 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-brq9d" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.922613 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5d646b7d76-prfwv"] Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.933844 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-jbtsm"] Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.935037 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-jbtsm" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.939842 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-nb5bz" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.943340 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-jbtsm"] Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.943627 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-8rxgq" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.944376 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttq9d\" (UniqueName: \"kubernetes.io/projected/ebd5834b-ef11-40bb-9d15-6878767e7bef-kube-api-access-ttq9d\") pod \"neutron-operator-controller-manager-5d8f59fb49-x8dvt\" (UID: \"ebd5834b-ef11-40bb-9d15-6878767e7bef\") " pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-x8dvt" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.944408 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-782cz\" (UniqueName: \"kubernetes.io/projected/c367fcfb-38d9-4834-970d-7004d16c8249-kube-api-access-782cz\") pod \"ironic-operator-controller-manager-69d6c9f5b8-5njtw\" (UID: \"c367fcfb-38d9-4834-970d-7004d16c8249\") " pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-5njtw" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.944439 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znk26\" (UniqueName: \"kubernetes.io/projected/80a16478-da8a-4d2f-89df-163fada49abe-kube-api-access-znk26\") pod \"nova-operator-controller-manager-6b8bc8d87d-mwhh9\" (UID: \"80a16478-da8a-4d2f-89df-163fada49abe\") " pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-mwhh9" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.944464 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/13c33fdb-b388-4fdf-996c-544286f47a73-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-zt4sd\" (UID: \"13c33fdb-b388-4fdf-996c-544286f47a73\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-zt4sd" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.944499 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bt5bv\" (UniqueName: \"kubernetes.io/projected/3d8a97d6-e3bd-49e0-bc78-024286cce303-kube-api-access-bt5bv\") pod \"manila-operator-controller-manager-78c6999f6f-ttb7f\" (UID: \"3d8a97d6-e3bd-49e0-bc78-024286cce303\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-ttb7f" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.944519 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5j2b\" (UniqueName: \"kubernetes.io/projected/a32a1e6f-004c-4675-abed-10078b43492a-kube-api-access-p5j2b\") pod \"mariadb-operator-controller-manager-c87fff755-w77v6\" (UID: \"a32a1e6f-004c-4675-abed-10078b43492a\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-w77v6" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.944535 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbvd2\" (UniqueName: \"kubernetes.io/projected/d8d08194-af60-4614-b425-1b45340cd73b-kube-api-access-dbvd2\") pod \"keystone-operator-controller-manager-b8b6d4659-f2klg\" (UID: \"d8d08194-af60-4614-b425-1b45340cd73b\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-f2klg" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.944559 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqfwj\" 
(UniqueName: \"kubernetes.io/projected/13c33fdb-b388-4fdf-996c-544286f47a73-kube-api-access-sqfwj\") pod \"infra-operator-controller-manager-54ccf4f85d-zt4sd\" (UID: \"13c33fdb-b388-4fdf-996c-544286f47a73\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-zt4sd" Jan 22 13:58:38 crc kubenswrapper[4769]: E0122 13:58:38.945739 4769 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 22 13:58:38 crc kubenswrapper[4769]: E0122 13:58:38.945781 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13c33fdb-b388-4fdf-996c-544286f47a73-cert podName:13c33fdb-b388-4fdf-996c-544286f47a73 nodeName:}" failed. No retries permitted until 2026-01-22 13:58:39.445766329 +0000 UTC m=+898.856876248 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/13c33fdb-b388-4fdf-996c-544286f47a73-cert") pod "infra-operator-controller-manager-54ccf4f85d-zt4sd" (UID: "13c33fdb-b388-4fdf-996c-544286f47a73") : secret "infra-operator-webhook-server-cert" not found Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.978757 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-782cz\" (UniqueName: \"kubernetes.io/projected/c367fcfb-38d9-4834-970d-7004d16c8249-kube-api-access-782cz\") pod \"ironic-operator-controller-manager-69d6c9f5b8-5njtw\" (UID: \"c367fcfb-38d9-4834-970d-7004d16c8249\") " pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-5njtw" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.982400 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqfwj\" (UniqueName: \"kubernetes.io/projected/13c33fdb-b388-4fdf-996c-544286f47a73-kube-api-access-sqfwj\") pod \"infra-operator-controller-manager-54ccf4f85d-zt4sd\" (UID: \"13c33fdb-b388-4fdf-996c-544286f47a73\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-zt4sd" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.029698 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-gwzt2"] Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.030563 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-gwzt2" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.033527 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-v76vj" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.039187 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-gwzt2"] Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.045736 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bt5bv\" (UniqueName: \"kubernetes.io/projected/3d8a97d6-e3bd-49e0-bc78-024286cce303-kube-api-access-bt5bv\") pod \"manila-operator-controller-manager-78c6999f6f-ttb7f\" (UID: \"3d8a97d6-e3bd-49e0-bc78-024286cce303\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-ttb7f" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.045779 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnphp\" (UniqueName: \"kubernetes.io/projected/f13c0d19-4c14-4897-bbc5-5c220d207e41-kube-api-access-dnphp\") pod \"ovn-operator-controller-manager-55db956ddc-ctf5z\" (UID: \"f13c0d19-4c14-4897-bbc5-5c220d207e41\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-ctf5z" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.045827 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5j2b\" (UniqueName: \"kubernetes.io/projected/a32a1e6f-004c-4675-abed-10078b43492a-kube-api-access-p5j2b\") pod \"mariadb-operator-controller-manager-c87fff755-w77v6\" (UID: \"a32a1e6f-004c-4675-abed-10078b43492a\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-w77v6" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.045851 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbvd2\" (UniqueName: \"kubernetes.io/projected/d8d08194-af60-4614-b425-1b45340cd73b-kube-api-access-dbvd2\") pod \"keystone-operator-controller-manager-b8b6d4659-f2klg\" (UID: \"d8d08194-af60-4614-b425-1b45340cd73b\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-f2klg" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.045875 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2b0a07de-4458-4970-a304-a608625bdebf-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8542tcht\" (UID: \"2b0a07de-4458-4970-a304-a608625bdebf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8542tcht" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.045891 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldb9n\" (UniqueName: \"kubernetes.io/projected/d931ff7f-f554-4249-bc34-2cd09fc97427-kube-api-access-ldb9n\") pod \"swift-operator-controller-manager-547cbdb99f-jbtsm\" (UID: \"d931ff7f-f554-4249-bc34-2cd09fc97427\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-jbtsm" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.045913 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r95kw\" (UniqueName: 
\"kubernetes.io/projected/11299941-70c0-41a8-ad9c-5c4648c3aa95-kube-api-access-r95kw\") pod \"placement-operator-controller-manager-5d646b7d76-prfwv\" (UID: \"11299941-70c0-41a8-ad9c-5c4648c3aa95\") " pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-prfwv" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.045935 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9r67\" (UniqueName: \"kubernetes.io/projected/8217a619-751c-4d07-a96c-ce3208f08e84-kube-api-access-r9r67\") pod \"octavia-operator-controller-manager-7bd9774b6-fzz6p\" (UID: \"8217a619-751c-4d07-a96c-ce3208f08e84\") " pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-fzz6p" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.045996 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttq9d\" (UniqueName: \"kubernetes.io/projected/ebd5834b-ef11-40bb-9d15-6878767e7bef-kube-api-access-ttq9d\") pod \"neutron-operator-controller-manager-5d8f59fb49-x8dvt\" (UID: \"ebd5834b-ef11-40bb-9d15-6878767e7bef\") " pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-x8dvt" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.046021 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6csb7\" (UniqueName: \"kubernetes.io/projected/2b0a07de-4458-4970-a304-a608625bdebf-kube-api-access-6csb7\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8542tcht\" (UID: \"2b0a07de-4458-4970-a304-a608625bdebf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8542tcht" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.046062 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znk26\" (UniqueName: \"kubernetes.io/projected/80a16478-da8a-4d2f-89df-163fada49abe-kube-api-access-znk26\") pod \"nova-operator-controller-manager-6b8bc8d87d-mwhh9\" (UID: \"80a16478-da8a-4d2f-89df-163fada49abe\") " pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-mwhh9" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.057615 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-5njtw" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.070159 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbvd2\" (UniqueName: \"kubernetes.io/projected/d8d08194-af60-4614-b425-1b45340cd73b-kube-api-access-dbvd2\") pod \"keystone-operator-controller-manager-b8b6d4659-f2klg\" (UID: \"d8d08194-af60-4614-b425-1b45340cd73b\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-f2klg" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.076475 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bt5bv\" (UniqueName: \"kubernetes.io/projected/3d8a97d6-e3bd-49e0-bc78-024286cce303-kube-api-access-bt5bv\") pod \"manila-operator-controller-manager-78c6999f6f-ttb7f\" (UID: \"3d8a97d6-e3bd-49e0-bc78-024286cce303\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-ttb7f" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.077536 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5j2b\" (UniqueName: \"kubernetes.io/projected/a32a1e6f-004c-4675-abed-10078b43492a-kube-api-access-p5j2b\") pod \"mariadb-operator-controller-manager-c87fff755-w77v6\" (UID: \"a32a1e6f-004c-4675-abed-10078b43492a\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-w77v6" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.079519 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znk26\" (UniqueName: \"kubernetes.io/projected/80a16478-da8a-4d2f-89df-163fada49abe-kube-api-access-znk26\") pod \"nova-operator-controller-manager-6b8bc8d87d-mwhh9\" (UID: \"80a16478-da8a-4d2f-89df-163fada49abe\") " pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-mwhh9" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.092490 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttq9d\" (UniqueName: \"kubernetes.io/projected/ebd5834b-ef11-40bb-9d15-6878767e7bef-kube-api-access-ttq9d\") pod \"neutron-operator-controller-manager-5d8f59fb49-x8dvt\" (UID: \"ebd5834b-ef11-40bb-9d15-6878767e7bef\") " pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-x8dvt" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.136985 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-f2klg" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.154965 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-w77v6" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.155210 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnphp\" (UniqueName: \"kubernetes.io/projected/f13c0d19-4c14-4897-bbc5-5c220d207e41-kube-api-access-dnphp\") pod \"ovn-operator-controller-manager-55db956ddc-ctf5z\" (UID: \"f13c0d19-4c14-4897-bbc5-5c220d207e41\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-ctf5z" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.155241 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2b0a07de-4458-4970-a304-a608625bdebf-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8542tcht\" (UID: \"2b0a07de-4458-4970-a304-a608625bdebf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8542tcht" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.155260 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldb9n\" (UniqueName: \"kubernetes.io/projected/d931ff7f-f554-4249-bc34-2cd09fc97427-kube-api-access-ldb9n\") pod \"swift-operator-controller-manager-547cbdb99f-jbtsm\" (UID: \"d931ff7f-f554-4249-bc34-2cd09fc97427\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-jbtsm" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.155280 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r95kw\" (UniqueName: \"kubernetes.io/projected/11299941-70c0-41a8-ad9c-5c4648c3aa95-kube-api-access-r95kw\") pod \"placement-operator-controller-manager-5d646b7d76-prfwv\" (UID: \"11299941-70c0-41a8-ad9c-5c4648c3aa95\") " pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-prfwv" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.155303 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9r67\" (UniqueName: \"kubernetes.io/projected/8217a619-751c-4d07-a96c-ce3208f08e84-kube-api-access-r9r67\") pod \"octavia-operator-controller-manager-7bd9774b6-fzz6p\" (UID: \"8217a619-751c-4d07-a96c-ce3208f08e84\") " pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-fzz6p" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.155357 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6csb7\" (UniqueName: \"kubernetes.io/projected/2b0a07de-4458-4970-a304-a608625bdebf-kube-api-access-6csb7\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8542tcht\" (UID: \"2b0a07de-4458-4970-a304-a608625bdebf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8542tcht" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.155385 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqdn8\" (UniqueName: \"kubernetes.io/projected/3c6369d9-2ecf-4187-bb10-76bde13ecd5d-kube-api-access-kqdn8\") pod \"telemetry-operator-controller-manager-85cd9769bb-gwzt2\" (UID: \"3c6369d9-2ecf-4187-bb10-76bde13ecd5d\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-gwzt2" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.156165 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-ttb7f" Jan 22 13:58:39 crc kubenswrapper[4769]: E0122 13:58:39.157463 4769 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 13:58:39 crc kubenswrapper[4769]: E0122 13:58:39.157538 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b0a07de-4458-4970-a304-a608625bdebf-cert podName:2b0a07de-4458-4970-a304-a608625bdebf nodeName:}" failed. No retries permitted until 2026-01-22 13:58:39.657519044 +0000 UTC m=+899.068628973 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2b0a07de-4458-4970-a304-a608625bdebf-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8542tcht" (UID: "2b0a07de-4458-4970-a304-a608625bdebf") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.162129 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-pkl6g"] Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.170022 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-pkl6g"] Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.171526 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-pkl6g" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.172950 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-x8dvt" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.175737 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-mwwp4" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.181281 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r95kw\" (UniqueName: \"kubernetes.io/projected/11299941-70c0-41a8-ad9c-5c4648c3aa95-kube-api-access-r95kw\") pod \"placement-operator-controller-manager-5d646b7d76-prfwv\" (UID: \"11299941-70c0-41a8-ad9c-5c4648c3aa95\") " pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-prfwv" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.185192 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnphp\" (UniqueName: \"kubernetes.io/projected/f13c0d19-4c14-4897-bbc5-5c220d207e41-kube-api-access-dnphp\") pod \"ovn-operator-controller-manager-55db956ddc-ctf5z\" (UID: \"f13c0d19-4c14-4897-bbc5-5c220d207e41\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-ctf5z" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.190150 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5ffb9c6597-b2w8p"] Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.194414 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-b2w8p" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.190484 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-mwhh9" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.202597 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldb9n\" (UniqueName: \"kubernetes.io/projected/d931ff7f-f554-4249-bc34-2cd09fc97427-kube-api-access-ldb9n\") pod \"swift-operator-controller-manager-547cbdb99f-jbtsm\" (UID: \"d931ff7f-f554-4249-bc34-2cd09fc97427\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-jbtsm" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.202838 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9r67\" (UniqueName: \"kubernetes.io/projected/8217a619-751c-4d07-a96c-ce3208f08e84-kube-api-access-r9r67\") pod \"octavia-operator-controller-manager-7bd9774b6-fzz6p\" (UID: \"8217a619-751c-4d07-a96c-ce3208f08e84\") " pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-fzz6p" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.209090 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-fzz6p" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.214855 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6csb7\" (UniqueName: \"kubernetes.io/projected/2b0a07de-4458-4970-a304-a608625bdebf-kube-api-access-6csb7\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8542tcht\" (UID: \"2b0a07de-4458-4970-a304-a608625bdebf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8542tcht" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.215485 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5ffb9c6597-b2w8p"] Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.237087 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-ctf5z" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.252631 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-prfwv" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.252870 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-r848c" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.274226 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqdn8\" (UniqueName: \"kubernetes.io/projected/3c6369d9-2ecf-4187-bb10-76bde13ecd5d-kube-api-access-kqdn8\") pod \"telemetry-operator-controller-manager-85cd9769bb-gwzt2\" (UID: \"3c6369d9-2ecf-4187-bb10-76bde13ecd5d\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-gwzt2" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.274301 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqwvz\" (UniqueName: \"kubernetes.io/projected/ed1198a5-a7fa-4ab4-9656-8e9700deec37-kube-api-access-sqwvz\") pod \"test-operator-controller-manager-69797bbcbd-pkl6g\" (UID: \"ed1198a5-a7fa-4ab4-9656-8e9700deec37\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-pkl6g" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.274567 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-jbtsm" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.346537 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqdn8\" (UniqueName: \"kubernetes.io/projected/3c6369d9-2ecf-4187-bb10-76bde13ecd5d-kube-api-access-kqdn8\") pod \"telemetry-operator-controller-manager-85cd9769bb-gwzt2\" (UID: \"3c6369d9-2ecf-4187-bb10-76bde13ecd5d\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-gwzt2" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.348697 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-54d678f547-4dd5j"] Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.349679 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-54d678f547-4dd5j" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.373576 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-hlb79" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.375932 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.376116 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.376325 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-54d678f547-4dd5j"] Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.377038 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59dbb\" (UniqueName: \"kubernetes.io/projected/31021ae3-dbb7-4ceb-8737-31052d849f0a-kube-api-access-59dbb\") pod \"watcher-operator-controller-manager-5ffb9c6597-b2w8p\" (UID: \"31021ae3-dbb7-4ceb-8737-31052d849f0a\") " pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-b2w8p" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.377089 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqwvz\" (UniqueName: \"kubernetes.io/projected/ed1198a5-a7fa-4ab4-9656-8e9700deec37-kube-api-access-sqwvz\") pod \"test-operator-controller-manager-69797bbcbd-pkl6g\" (UID: \"ed1198a5-a7fa-4ab4-9656-8e9700deec37\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-pkl6g" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.387355 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-gwzt2" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.408726 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqwvz\" (UniqueName: \"kubernetes.io/projected/ed1198a5-a7fa-4ab4-9656-8e9700deec37-kube-api-access-sqwvz\") pod \"test-operator-controller-manager-69797bbcbd-pkl6g\" (UID: \"ed1198a5-a7fa-4ab4-9656-8e9700deec37\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-pkl6g" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.487579 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a2bbc43c-9feb-4287-9e35-6f100c6644f6-webhook-certs\") pod \"openstack-operator-controller-manager-54d678f547-4dd5j\" (UID: \"a2bbc43c-9feb-4287-9e35-6f100c6644f6\") " pod="openstack-operators/openstack-operator-controller-manager-54d678f547-4dd5j" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.487685 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59dbb\" (UniqueName: \"kubernetes.io/projected/31021ae3-dbb7-4ceb-8737-31052d849f0a-kube-api-access-59dbb\") pod \"watcher-operator-controller-manager-5ffb9c6597-b2w8p\" (UID: \"31021ae3-dbb7-4ceb-8737-31052d849f0a\") " pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-b2w8p" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.487716 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a2bbc43c-9feb-4287-9e35-6f100c6644f6-metrics-certs\") pod \"openstack-operator-controller-manager-54d678f547-4dd5j\" (UID: \"a2bbc43c-9feb-4287-9e35-6f100c6644f6\") " pod="openstack-operators/openstack-operator-controller-manager-54d678f547-4dd5j" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.487738 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcxbv\" (UniqueName: \"kubernetes.io/projected/a2bbc43c-9feb-4287-9e35-6f100c6644f6-kube-api-access-dcxbv\") pod \"openstack-operator-controller-manager-54d678f547-4dd5j\" (UID: \"a2bbc43c-9feb-4287-9e35-6f100c6644f6\") " pod="openstack-operators/openstack-operator-controller-manager-54d678f547-4dd5j" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.487764 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/13c33fdb-b388-4fdf-996c-544286f47a73-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-zt4sd\" (UID: \"13c33fdb-b388-4fdf-996c-544286f47a73\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-zt4sd" Jan 22 13:58:39 crc kubenswrapper[4769]: E0122 13:58:39.490264 4769 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 22 13:58:39 crc kubenswrapper[4769]: E0122 13:58:39.490331 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13c33fdb-b388-4fdf-996c-544286f47a73-cert podName:13c33fdb-b388-4fdf-996c-544286f47a73 nodeName:}" failed. No retries permitted until 2026-01-22 13:58:40.490311029 +0000 UTC m=+899.901420958 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/13c33fdb-b388-4fdf-996c-544286f47a73-cert") pod "infra-operator-controller-manager-54ccf4f85d-zt4sd" (UID: "13c33fdb-b388-4fdf-996c-544286f47a73") : secret "infra-operator-webhook-server-cert" not found Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.505119 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-pkl6g" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.511836 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hv48h"] Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.512672 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hv48h" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.520397 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-lw4v5" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.541352 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-69cf5d4557-2q2v2"] Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.558593 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59dbb\" (UniqueName: \"kubernetes.io/projected/31021ae3-dbb7-4ceb-8737-31052d849f0a-kube-api-access-59dbb\") pod \"watcher-operator-controller-manager-5ffb9c6597-b2w8p\" (UID: \"31021ae3-dbb7-4ceb-8737-31052d849f0a\") " pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-b2w8p" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.599783 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hv48h"] Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.605084 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-54q5q"] Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.605038 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a2bbc43c-9feb-4287-9e35-6f100c6644f6-webhook-certs\") pod \"openstack-operator-controller-manager-54d678f547-4dd5j\" (UID: \"a2bbc43c-9feb-4287-9e35-6f100c6644f6\") " pod="openstack-operators/openstack-operator-controller-manager-54d678f547-4dd5j" Jan 22 13:58:39 crc kubenswrapper[4769]: E0122 13:58:39.605136 4769 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 22 13:58:39 crc kubenswrapper[4769]: E0122 13:58:39.605197 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a2bbc43c-9feb-4287-9e35-6f100c6644f6-webhook-certs podName:a2bbc43c-9feb-4287-9e35-6f100c6644f6 nodeName:}" failed. No retries permitted until 2026-01-22 13:58:40.10517629 +0000 UTC m=+899.516286219 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a2bbc43c-9feb-4287-9e35-6f100c6644f6-webhook-certs") pod "openstack-operator-controller-manager-54d678f547-4dd5j" (UID: "a2bbc43c-9feb-4287-9e35-6f100c6644f6") : secret "webhook-server-cert" not found Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.605278 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a2bbc43c-9feb-4287-9e35-6f100c6644f6-metrics-certs\") pod \"openstack-operator-controller-manager-54d678f547-4dd5j\" (UID: \"a2bbc43c-9feb-4287-9e35-6f100c6644f6\") " pod="openstack-operators/openstack-operator-controller-manager-54d678f547-4dd5j" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.605306 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcxbv\" (UniqueName: \"kubernetes.io/projected/a2bbc43c-9feb-4287-9e35-6f100c6644f6-kube-api-access-dcxbv\") pod \"openstack-operator-controller-manager-54d678f547-4dd5j\" (UID: \"a2bbc43c-9feb-4287-9e35-6f100c6644f6\") " pod="openstack-operators/openstack-operator-controller-manager-54d678f547-4dd5j" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.605662 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tg9m8\" (UniqueName: \"kubernetes.io/projected/14005034-1ce8-4d62-afbc-66cd1d0d9be1-kube-api-access-tg9m8\") pod \"rabbitmq-cluster-operator-manager-668c99d594-hv48h\" (UID: \"14005034-1ce8-4d62-afbc-66cd1d0d9be1\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hv48h" Jan 22 13:58:39 crc kubenswrapper[4769]: E0122 13:58:39.605503 4769 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 22 13:58:39 crc kubenswrapper[4769]: E0122 13:58:39.605775 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a2bbc43c-9feb-4287-9e35-6f100c6644f6-metrics-certs podName:a2bbc43c-9feb-4287-9e35-6f100c6644f6 nodeName:}" failed. No retries permitted until 2026-01-22 13:58:40.105763166 +0000 UTC m=+899.516873095 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a2bbc43c-9feb-4287-9e35-6f100c6644f6-metrics-certs") pod "openstack-operator-controller-manager-54d678f547-4dd5j" (UID: "a2bbc43c-9feb-4287-9e35-6f100c6644f6") : secret "metrics-server-cert" not found Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.612873 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-wvxp8"] Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.644776 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcxbv\" (UniqueName: \"kubernetes.io/projected/a2bbc43c-9feb-4287-9e35-6f100c6644f6-kube-api-access-dcxbv\") pod \"openstack-operator-controller-manager-54d678f547-4dd5j\" (UID: \"a2bbc43c-9feb-4287-9e35-6f100c6644f6\") " pod="openstack-operators/openstack-operator-controller-manager-54d678f547-4dd5j" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.666972 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-rlcb9"] Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.706583 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tg9m8\" (UniqueName: \"kubernetes.io/projected/14005034-1ce8-4d62-afbc-66cd1d0d9be1-kube-api-access-tg9m8\") pod \"rabbitmq-cluster-operator-manager-668c99d594-hv48h\" (UID: \"14005034-1ce8-4d62-afbc-66cd1d0d9be1\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hv48h" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.706632 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2b0a07de-4458-4970-a304-a608625bdebf-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8542tcht\" (UID: \"2b0a07de-4458-4970-a304-a608625bdebf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8542tcht" Jan 22 13:58:39 crc kubenswrapper[4769]: E0122 13:58:39.706891 4769 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 13:58:39 crc kubenswrapper[4769]: E0122 13:58:39.706943 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b0a07de-4458-4970-a304-a608625bdebf-cert podName:2b0a07de-4458-4970-a304-a608625bdebf nodeName:}" failed. No retries permitted until 2026-01-22 13:58:40.70692882 +0000 UTC m=+900.118038749 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2b0a07de-4458-4970-a304-a608625bdebf-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8542tcht" (UID: "2b0a07de-4458-4970-a304-a608625bdebf") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.728134 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tg9m8\" (UniqueName: \"kubernetes.io/projected/14005034-1ce8-4d62-afbc-66cd1d0d9be1-kube-api-access-tg9m8\") pod \"rabbitmq-cluster-operator-manager-668c99d594-hv48h\" (UID: \"14005034-1ce8-4d62-afbc-66cd1d0d9be1\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hv48h" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.829101 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-b2w8p" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.959450 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-brq9d"] Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.987720 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-8rxgq"] Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.994914 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hv48h" Jan 22 13:58:40 crc kubenswrapper[4769]: I0122 13:58:40.013527 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-5njtw"] Jan 22 13:58:40 crc kubenswrapper[4769]: I0122 13:58:40.018721 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5d8f59fb49-x8dvt"] Jan 22 13:58:40 crc kubenswrapper[4769]: I0122 13:58:40.120437 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a2bbc43c-9feb-4287-9e35-6f100c6644f6-metrics-certs\") pod \"openstack-operator-controller-manager-54d678f547-4dd5j\" (UID: \"a2bbc43c-9feb-4287-9e35-6f100c6644f6\") " pod="openstack-operators/openstack-operator-controller-manager-54d678f547-4dd5j" Jan 22 13:58:40 crc kubenswrapper[4769]: I0122 13:58:40.121208 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a2bbc43c-9feb-4287-9e35-6f100c6644f6-webhook-certs\") pod \"openstack-operator-controller-manager-54d678f547-4dd5j\" (UID: \"a2bbc43c-9feb-4287-9e35-6f100c6644f6\") " pod="openstack-operators/openstack-operator-controller-manager-54d678f547-4dd5j" Jan 22 13:58:40 crc kubenswrapper[4769]: E0122 13:58:40.121445 4769 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 22 13:58:40 crc kubenswrapper[4769]: E0122 13:58:40.121519 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a2bbc43c-9feb-4287-9e35-6f100c6644f6-webhook-certs podName:a2bbc43c-9feb-4287-9e35-6f100c6644f6 nodeName:}" failed. No retries permitted until 2026-01-22 13:58:41.121502455 +0000 UTC m=+900.532612384 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a2bbc43c-9feb-4287-9e35-6f100c6644f6-webhook-certs") pod "openstack-operator-controller-manager-54d678f547-4dd5j" (UID: "a2bbc43c-9feb-4287-9e35-6f100c6644f6") : secret "webhook-server-cert" not found Jan 22 13:58:40 crc kubenswrapper[4769]: E0122 13:58:40.121834 4769 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 22 13:58:40 crc kubenswrapper[4769]: E0122 13:58:40.121868 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a2bbc43c-9feb-4287-9e35-6f100c6644f6-metrics-certs podName:a2bbc43c-9feb-4287-9e35-6f100c6644f6 nodeName:}" failed. No retries permitted until 2026-01-22 13:58:41.121858824 +0000 UTC m=+900.532968753 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a2bbc43c-9feb-4287-9e35-6f100c6644f6-metrics-certs") pod "openstack-operator-controller-manager-54d678f547-4dd5j" (UID: "a2bbc43c-9feb-4287-9e35-6f100c6644f6") : secret "metrics-server-cert" not found Jan 22 13:58:40 crc kubenswrapper[4769]: I0122 13:58:40.481677 4769 patch_prober.go:28] interesting pod/machine-config-daemon-hwhw7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 13:58:40 crc kubenswrapper[4769]: I0122 13:58:40.481739 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 13:58:40 crc kubenswrapper[4769]: I0122 13:58:40.483962 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-f2klg"] Jan 22 13:58:40 crc kubenswrapper[4769]: W0122 13:58:40.486390 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd8d08194_af60_4614_b425_1b45340cd73b.slice/crio-8e211b6acad458a8263752ea8cf0d4dda5d997b1f10fcd01c4df1ec4033fb451 WatchSource:0}: Error finding container 8e211b6acad458a8263752ea8cf0d4dda5d997b1f10fcd01c4df1ec4033fb451: Status 404 returned error can't find the container with id 8e211b6acad458a8263752ea8cf0d4dda5d997b1f10fcd01c4df1ec4033fb451 Jan 22 13:58:40 crc kubenswrapper[4769]: I0122 13:58:40.498775 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-ttb7f"] Jan 22 13:58:40 crc kubenswrapper[4769]: I0122 13:58:40.506376 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-w77v6"] Jan 22 13:58:40 crc kubenswrapper[4769]: I0122 13:58:40.519964 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-f2klg" event={"ID":"d8d08194-af60-4614-b425-1b45340cd73b","Type":"ContainerStarted","Data":"8e211b6acad458a8263752ea8cf0d4dda5d997b1f10fcd01c4df1ec4033fb451"} Jan 22 13:58:40 crc kubenswrapper[4769]: I0122 13:58:40.521447 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-gwzt2"] Jan 22 13:58:40 crc kubenswrapper[4769]: I0122 13:58:40.526093 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-wvxp8" event={"ID":"ae11ee9d-5ccf-490d-b457-294820d6a337","Type":"ContainerStarted","Data":"799998ea08e0e9bbfd48036a0c80aa79d93566022d40f3b7b707499213319f26"} Jan 22 13:58:40 crc kubenswrapper[4769]: I0122 13:58:40.528019 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/13c33fdb-b388-4fdf-996c-544286f47a73-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-zt4sd\" (UID: \"13c33fdb-b388-4fdf-996c-544286f47a73\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-zt4sd" Jan 22 13:58:40 crc kubenswrapper[4769]: E0122 13:58:40.529686 
4769 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 22 13:58:40 crc kubenswrapper[4769]: E0122 13:58:40.529748 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13c33fdb-b388-4fdf-996c-544286f47a73-cert podName:13c33fdb-b388-4fdf-996c-544286f47a73 nodeName:}" failed. No retries permitted until 2026-01-22 13:58:42.529729776 +0000 UTC m=+901.940839705 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/13c33fdb-b388-4fdf-996c-544286f47a73-cert") pod "infra-operator-controller-manager-54ccf4f85d-zt4sd" (UID: "13c33fdb-b388-4fdf-996c-544286f47a73") : secret "infra-operator-webhook-server-cert" not found Jan 22 13:58:40 crc kubenswrapper[4769]: I0122 13:58:40.530989 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-x8dvt" event={"ID":"ebd5834b-ef11-40bb-9d15-6878767e7bef","Type":"ContainerStarted","Data":"c349c0257cd7a9326d3d87df3ce033e911cfd3472e4d28d3efc7de87efe40657"} Jan 22 13:58:40 crc kubenswrapper[4769]: I0122 13:58:40.538167 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-8rxgq" event={"ID":"7d908338-dcdc-4423-b719-02d30f3834ed","Type":"ContainerStarted","Data":"5ef13771deecc8c309d7762f6963cf36a214998b36dc692db2640ecda3261740"} Jan 22 13:58:40 crc kubenswrapper[4769]: W0122 13:58:40.540245 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8217a619_751c_4d07_a96c_ce3208f08e84.slice/crio-1e96bdc7fa11a79bdb532015357913b793cf454020be383e6e08d5c5cf70e34a WatchSource:0}: Error finding container 1e96bdc7fa11a79bdb532015357913b793cf454020be383e6e08d5c5cf70e34a: Status 404 returned error can't find the container with id 1e96bdc7fa11a79bdb532015357913b793cf454020be383e6e08d5c5cf70e34a Jan 22 13:58:40 crc kubenswrapper[4769]: W0122 13:58:40.542560 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3c6369d9_2ecf_4187_bb10_76bde13ecd5d.slice/crio-cc2292226f835b99f454e647917e17c67b71020967683c449bde66a2f08937b3 WatchSource:0}: Error finding container cc2292226f835b99f454e647917e17c67b71020967683c449bde66a2f08937b3: Status 404 returned error can't find the container with id cc2292226f835b99f454e647917e17c67b71020967683c449bde66a2f08937b3 Jan 22 13:58:40 crc kubenswrapper[4769]: I0122 13:58:40.545075 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bd9774b6-fzz6p"] Jan 22 13:58:40 crc kubenswrapper[4769]: I0122 13:58:40.547585 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-5njtw" event={"ID":"c367fcfb-38d9-4834-970d-7004d16c8249","Type":"ContainerStarted","Data":"b5785ce3c0ec2d8279f80e9310d8e179645d336badfcdb99c1cda8aa102ff702"} Jan 22 13:58:40 crc kubenswrapper[4769]: I0122 13:58:40.559546 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-2q2v2" event={"ID":"bc0b4b03-ee7e-44ed-9c1f-f481ae1a3049","Type":"ContainerStarted","Data":"6b52b5800978ebeeb1c45b8d6a8cd5f94d3285a287bed1bc73b9e9c33a62ec35"} Jan 22 13:58:40 crc kubenswrapper[4769]: I0122 13:58:40.561371 4769 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-54q5q" event={"ID":"141f0476-23eb-4a43-a4ac-4d33c12bfb5b","Type":"ContainerStarted","Data":"b73001b0e9c2fbacf92a624cb9c8f69eae961c7638f8808b7207a3d6134f8f92"} Jan 22 13:58:40 crc kubenswrapper[4769]: I0122 13:58:40.562197 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-rlcb9" event={"ID":"c6b325d8-50c6-411a-bc7f-938b284f0efb","Type":"ContainerStarted","Data":"0ae85f4387bb09d6be1023e705a1beb47cf034173e4e3ef9f8ce2a4b79bd3fb9"} Jan 22 13:58:40 crc kubenswrapper[4769]: I0122 13:58:40.562955 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-brq9d" event={"ID":"d40b03ae-0991-4364-85f3-89cf5e8d5686","Type":"ContainerStarted","Data":"b56191106aeb936cd96b008014ab64102c13e10ce2dff5f478db4fec28fa8141"} Jan 22 13:58:40 crc kubenswrapper[4769]: I0122 13:58:40.585008 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-pkl6g"] Jan 22 13:58:40 crc kubenswrapper[4769]: I0122 13:58:40.599846 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-6b8bc8d87d-mwhh9"] Jan 22 13:58:40 crc kubenswrapper[4769]: I0122 13:58:40.605448 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5ffb9c6597-b2w8p"] Jan 22 13:58:40 crc kubenswrapper[4769]: I0122 13:58:40.610747 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5d646b7d76-prfwv"] Jan 22 13:58:40 crc kubenswrapper[4769]: E0122 13:58:40.615048 4769 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:2d6d13b3c28e45c6bec980b8808dda8da4723ae87e66d04f53d52c3b3c51612b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-59dbb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-5ffb9c6597-b2w8p_openstack-operators(31021ae3-dbb7-4ceb-8737-31052d849f0a): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 22 13:58:40 crc kubenswrapper[4769]: E0122 13:58:40.616187 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-b2w8p" podUID="31021ae3-dbb7-4ceb-8737-31052d849f0a" Jan 22 13:58:40 crc kubenswrapper[4769]: I0122 13:58:40.616599 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-ctf5z"] Jan 22 13:58:40 crc kubenswrapper[4769]: E0122 13:58:40.616663 4769 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ldb9n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-547cbdb99f-jbtsm_openstack-operators(d931ff7f-f554-4249-bc34-2cd09fc97427): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 22 13:58:40 crc kubenswrapper[4769]: E0122 13:58:40.618287 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-jbtsm" podUID="d931ff7f-f554-4249-bc34-2cd09fc97427" Jan 22 13:58:40 crc kubenswrapper[4769]: E0122 13:58:40.619058 4769 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:4e995cfa360a9d595a01b9c0541ab934692f2374203cb5738127dd784f793831,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-znk26,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-6b8bc8d87d-mwhh9_openstack-operators(80a16478-da8a-4d2f-89df-163fada49abe): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 22 13:58:40 crc kubenswrapper[4769]: E0122 13:58:40.619446 4769 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tg9m8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-hv48h_openstack-operators(14005034-1ce8-4d62-afbc-66cd1d0d9be1): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 22 13:58:40 crc kubenswrapper[4769]: E0122 13:58:40.619576 4769 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:65cfe5b9d5b0571aaf8ff9840b12cc56e90ca4cef162dd260c3a9fa2b52c6dd0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-r95kw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5d646b7d76-prfwv_openstack-operators(11299941-70c0-41a8-ad9c-5c4648c3aa95): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 22 13:58:40 crc kubenswrapper[4769]: E0122 13:58:40.620187 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-mwhh9" podUID="80a16478-da8a-4d2f-89df-163fada49abe" Jan 22 13:58:40 crc kubenswrapper[4769]: E0122 13:58:40.620487 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hv48h" podUID="14005034-1ce8-4d62-afbc-66cd1d0d9be1" Jan 22 13:58:40 crc kubenswrapper[4769]: E0122 13:58:40.620985 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-prfwv" podUID="11299941-70c0-41a8-ad9c-5c4648c3aa95" Jan 22 13:58:40 crc kubenswrapper[4769]: E0122 13:58:40.621634 4769 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dnphp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-55db956ddc-ctf5z_openstack-operators(f13c0d19-4c14-4897-bbc5-5c220d207e41): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 22 13:58:40 crc kubenswrapper[4769]: E0122 13:58:40.622962 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-ctf5z" podUID="f13c0d19-4c14-4897-bbc5-5c220d207e41" Jan 22 13:58:40 crc kubenswrapper[4769]: I0122 13:58:40.625919 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-jbtsm"] Jan 22 13:58:40 crc kubenswrapper[4769]: I0122 13:58:40.631370 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hv48h"] Jan 22 13:58:40 crc kubenswrapper[4769]: I0122 13:58:40.738190 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2b0a07de-4458-4970-a304-a608625bdebf-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8542tcht\" (UID: \"2b0a07de-4458-4970-a304-a608625bdebf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8542tcht" Jan 22 13:58:40 crc kubenswrapper[4769]: E0122 13:58:40.738380 4769 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret 
"openstack-baremetal-operator-webhook-server-cert" not found Jan 22 13:58:40 crc kubenswrapper[4769]: E0122 13:58:40.738452 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b0a07de-4458-4970-a304-a608625bdebf-cert podName:2b0a07de-4458-4970-a304-a608625bdebf nodeName:}" failed. No retries permitted until 2026-01-22 13:58:42.73843188 +0000 UTC m=+902.149541809 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2b0a07de-4458-4970-a304-a608625bdebf-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8542tcht" (UID: "2b0a07de-4458-4970-a304-a608625bdebf") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 13:58:41 crc kubenswrapper[4769]: I0122 13:58:41.146728 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a2bbc43c-9feb-4287-9e35-6f100c6644f6-metrics-certs\") pod \"openstack-operator-controller-manager-54d678f547-4dd5j\" (UID: \"a2bbc43c-9feb-4287-9e35-6f100c6644f6\") " pod="openstack-operators/openstack-operator-controller-manager-54d678f547-4dd5j" Jan 22 13:58:41 crc kubenswrapper[4769]: I0122 13:58:41.147111 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a2bbc43c-9feb-4287-9e35-6f100c6644f6-webhook-certs\") pod \"openstack-operator-controller-manager-54d678f547-4dd5j\" (UID: \"a2bbc43c-9feb-4287-9e35-6f100c6644f6\") " pod="openstack-operators/openstack-operator-controller-manager-54d678f547-4dd5j" Jan 22 13:58:41 crc kubenswrapper[4769]: E0122 13:58:41.147243 4769 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 22 13:58:41 crc kubenswrapper[4769]: E0122 13:58:41.147292 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a2bbc43c-9feb-4287-9e35-6f100c6644f6-webhook-certs podName:a2bbc43c-9feb-4287-9e35-6f100c6644f6 nodeName:}" failed. No retries permitted until 2026-01-22 13:58:43.147275236 +0000 UTC m=+902.558385165 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a2bbc43c-9feb-4287-9e35-6f100c6644f6-webhook-certs") pod "openstack-operator-controller-manager-54d678f547-4dd5j" (UID: "a2bbc43c-9feb-4287-9e35-6f100c6644f6") : secret "webhook-server-cert" not found Jan 22 13:58:41 crc kubenswrapper[4769]: E0122 13:58:41.147640 4769 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 22 13:58:41 crc kubenswrapper[4769]: E0122 13:58:41.147679 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a2bbc43c-9feb-4287-9e35-6f100c6644f6-metrics-certs podName:a2bbc43c-9feb-4287-9e35-6f100c6644f6 nodeName:}" failed. No retries permitted until 2026-01-22 13:58:43.147659836 +0000 UTC m=+902.558769765 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a2bbc43c-9feb-4287-9e35-6f100c6644f6-metrics-certs") pod "openstack-operator-controller-manager-54d678f547-4dd5j" (UID: "a2bbc43c-9feb-4287-9e35-6f100c6644f6") : secret "metrics-server-cert" not found Jan 22 13:58:41 crc kubenswrapper[4769]: I0122 13:58:41.573037 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-pkl6g" event={"ID":"ed1198a5-a7fa-4ab4-9656-8e9700deec37","Type":"ContainerStarted","Data":"404cb91568c372461bba865aeb8b5fe1b216c271d1652940359fb48dab557cb3"} Jan 22 13:58:41 crc kubenswrapper[4769]: I0122 13:58:41.574554 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-ttb7f" event={"ID":"3d8a97d6-e3bd-49e0-bc78-024286cce303","Type":"ContainerStarted","Data":"1560051fd9396015c3821b45a37ac2eb5f38df31f66186026e831be5db48b178"} Jan 22 13:58:41 crc kubenswrapper[4769]: I0122 13:58:41.575945 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-fzz6p" event={"ID":"8217a619-751c-4d07-a96c-ce3208f08e84","Type":"ContainerStarted","Data":"1e96bdc7fa11a79bdb532015357913b793cf454020be383e6e08d5c5cf70e34a"} Jan 22 13:58:41 crc kubenswrapper[4769]: I0122 13:58:41.577580 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-prfwv" event={"ID":"11299941-70c0-41a8-ad9c-5c4648c3aa95","Type":"ContainerStarted","Data":"78a36011e50eeea129f34b1d97d83c27efe609521c55b88920169e70d818d533"} Jan 22 13:58:41 crc kubenswrapper[4769]: E0122 13:58:41.583181 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:65cfe5b9d5b0571aaf8ff9840b12cc56e90ca4cef162dd260c3a9fa2b52c6dd0\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-prfwv" podUID="11299941-70c0-41a8-ad9c-5c4648c3aa95" Jan 22 13:58:41 crc kubenswrapper[4769]: I0122 13:58:41.583182 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-mwhh9" event={"ID":"80a16478-da8a-4d2f-89df-163fada49abe","Type":"ContainerStarted","Data":"28afbba3d9e8a3dd073b655e22ecfea05e5436d84c43581420e67d363507ba3d"} Jan 22 13:58:41 crc kubenswrapper[4769]: E0122 13:58:41.586333 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:4e995cfa360a9d595a01b9c0541ab934692f2374203cb5738127dd784f793831\\\"\"" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-mwhh9" podUID="80a16478-da8a-4d2f-89df-163fada49abe" Jan 22 13:58:41 crc kubenswrapper[4769]: I0122 13:58:41.586562 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-b2w8p" event={"ID":"31021ae3-dbb7-4ceb-8737-31052d849f0a","Type":"ContainerStarted","Data":"b9149c2c462ac76241b7958b988412ef09cf6085d8a01901aff67b47c8d763c0"} Jan 22 13:58:41 crc kubenswrapper[4769]: E0122 13:58:41.587854 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:2d6d13b3c28e45c6bec980b8808dda8da4723ae87e66d04f53d52c3b3c51612b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-b2w8p" podUID="31021ae3-dbb7-4ceb-8737-31052d849f0a" Jan 22 13:58:41 crc kubenswrapper[4769]: I0122 13:58:41.587892 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-jbtsm" event={"ID":"d931ff7f-f554-4249-bc34-2cd09fc97427","Type":"ContainerStarted","Data":"ddc61e35bd61dede929a152277955adafeb3ff8ce918aec58cc9f7b823b8336a"} Jan 22 13:58:41 crc kubenswrapper[4769]: E0122 13:58:41.589313 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922\\\"\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-jbtsm" podUID="d931ff7f-f554-4249-bc34-2cd09fc97427" Jan 22 13:58:41 crc kubenswrapper[4769]: I0122 13:58:41.589835 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-ctf5z" event={"ID":"f13c0d19-4c14-4897-bbc5-5c220d207e41","Type":"ContainerStarted","Data":"148747892a47776f1b0cb5f392e6cacf2f02648d0926bebde9daafc560a42863"} Jan 22 13:58:41 crc kubenswrapper[4769]: I0122 13:58:41.590859 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-w77v6" event={"ID":"a32a1e6f-004c-4675-abed-10078b43492a","Type":"ContainerStarted","Data":"c4e99c31781ef758d4fb4f4acc26b08431f5b29c047db8d9d0677ce02a928a4e"} Jan 22 13:58:41 crc kubenswrapper[4769]: E0122 13:58:41.591013 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-ctf5z" podUID="f13c0d19-4c14-4897-bbc5-5c220d207e41" Jan 22 13:58:41 crc kubenswrapper[4769]: I0122 13:58:41.593427 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-gwzt2" event={"ID":"3c6369d9-2ecf-4187-bb10-76bde13ecd5d","Type":"ContainerStarted","Data":"cc2292226f835b99f454e647917e17c67b71020967683c449bde66a2f08937b3"} Jan 22 13:58:41 crc kubenswrapper[4769]: I0122 13:58:41.594584 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hv48h" event={"ID":"14005034-1ce8-4d62-afbc-66cd1d0d9be1","Type":"ContainerStarted","Data":"f27a66b4d9c86597d51f5e04be69641aa97a3f921f3d9981d997cb29bcc706d9"} Jan 22 13:58:41 crc kubenswrapper[4769]: E0122 13:58:41.596231 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hv48h" podUID="14005034-1ce8-4d62-afbc-66cd1d0d9be1" Jan 22 13:58:42 crc kubenswrapper[4769]: I0122 13:58:42.209836 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-hslhq" Jan 22 13:58:42 crc kubenswrapper[4769]: I0122 13:58:42.209891 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hslhq" Jan 22 13:58:42 crc kubenswrapper[4769]: I0122 13:58:42.267857 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hslhq" Jan 22 13:58:42 crc kubenswrapper[4769]: I0122 13:58:42.571352 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/13c33fdb-b388-4fdf-996c-544286f47a73-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-zt4sd\" (UID: \"13c33fdb-b388-4fdf-996c-544286f47a73\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-zt4sd" Jan 22 13:58:42 crc kubenswrapper[4769]: E0122 13:58:42.571585 4769 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 22 13:58:42 crc kubenswrapper[4769]: E0122 13:58:42.571656 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13c33fdb-b388-4fdf-996c-544286f47a73-cert podName:13c33fdb-b388-4fdf-996c-544286f47a73 nodeName:}" failed. No retries permitted until 2026-01-22 13:58:46.571630286 +0000 UTC m=+905.982740215 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/13c33fdb-b388-4fdf-996c-544286f47a73-cert") pod "infra-operator-controller-manager-54ccf4f85d-zt4sd" (UID: "13c33fdb-b388-4fdf-996c-544286f47a73") : secret "infra-operator-webhook-server-cert" not found Jan 22 13:58:42 crc kubenswrapper[4769]: E0122 13:58:42.619854 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:65cfe5b9d5b0571aaf8ff9840b12cc56e90ca4cef162dd260c3a9fa2b52c6dd0\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-prfwv" podUID="11299941-70c0-41a8-ad9c-5c4648c3aa95" Jan 22 13:58:42 crc kubenswrapper[4769]: E0122 13:58:42.619898 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922\\\"\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-jbtsm" podUID="d931ff7f-f554-4249-bc34-2cd09fc97427" Jan 22 13:58:42 crc kubenswrapper[4769]: E0122 13:58:42.620294 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:2d6d13b3c28e45c6bec980b8808dda8da4723ae87e66d04f53d52c3b3c51612b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-b2w8p" podUID="31021ae3-dbb7-4ceb-8737-31052d849f0a" Jan 22 13:58:42 crc kubenswrapper[4769]: E0122 13:58:42.620710 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:4e995cfa360a9d595a01b9c0541ab934692f2374203cb5738127dd784f793831\\\"\"" 
pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-mwhh9" podUID="80a16478-da8a-4d2f-89df-163fada49abe" Jan 22 13:58:42 crc kubenswrapper[4769]: E0122 13:58:42.621854 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hv48h" podUID="14005034-1ce8-4d62-afbc-66cd1d0d9be1" Jan 22 13:58:42 crc kubenswrapper[4769]: E0122 13:58:42.622947 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-ctf5z" podUID="f13c0d19-4c14-4897-bbc5-5c220d207e41" Jan 22 13:58:42 crc kubenswrapper[4769]: I0122 13:58:42.722202 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hslhq" Jan 22 13:58:42 crc kubenswrapper[4769]: I0122 13:58:42.775533 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2b0a07de-4458-4970-a304-a608625bdebf-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8542tcht\" (UID: \"2b0a07de-4458-4970-a304-a608625bdebf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8542tcht" Jan 22 13:58:42 crc kubenswrapper[4769]: E0122 13:58:42.775734 4769 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 13:58:42 crc kubenswrapper[4769]: E0122 13:58:42.775849 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b0a07de-4458-4970-a304-a608625bdebf-cert podName:2b0a07de-4458-4970-a304-a608625bdebf nodeName:}" failed. No retries permitted until 2026-01-22 13:58:46.775829824 +0000 UTC m=+906.186939743 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2b0a07de-4458-4970-a304-a608625bdebf-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8542tcht" (UID: "2b0a07de-4458-4970-a304-a608625bdebf") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 13:58:42 crc kubenswrapper[4769]: I0122 13:58:42.781044 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hslhq"] Jan 22 13:58:43 crc kubenswrapper[4769]: I0122 13:58:43.183326 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a2bbc43c-9feb-4287-9e35-6f100c6644f6-metrics-certs\") pod \"openstack-operator-controller-manager-54d678f547-4dd5j\" (UID: \"a2bbc43c-9feb-4287-9e35-6f100c6644f6\") " pod="openstack-operators/openstack-operator-controller-manager-54d678f547-4dd5j" Jan 22 13:58:43 crc kubenswrapper[4769]: I0122 13:58:43.183440 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a2bbc43c-9feb-4287-9e35-6f100c6644f6-webhook-certs\") pod \"openstack-operator-controller-manager-54d678f547-4dd5j\" (UID: \"a2bbc43c-9feb-4287-9e35-6f100c6644f6\") " pod="openstack-operators/openstack-operator-controller-manager-54d678f547-4dd5j" Jan 22 13:58:43 crc kubenswrapper[4769]: E0122 13:58:43.183583 4769 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 22 13:58:43 crc kubenswrapper[4769]: E0122 13:58:43.183603 4769 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 22 13:58:43 crc kubenswrapper[4769]: E0122 13:58:43.183641 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a2bbc43c-9feb-4287-9e35-6f100c6644f6-webhook-certs podName:a2bbc43c-9feb-4287-9e35-6f100c6644f6 nodeName:}" failed. No retries permitted until 2026-01-22 13:58:47.183620912 +0000 UTC m=+906.594730841 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a2bbc43c-9feb-4287-9e35-6f100c6644f6-webhook-certs") pod "openstack-operator-controller-manager-54d678f547-4dd5j" (UID: "a2bbc43c-9feb-4287-9e35-6f100c6644f6") : secret "webhook-server-cert" not found Jan 22 13:58:43 crc kubenswrapper[4769]: E0122 13:58:43.183692 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a2bbc43c-9feb-4287-9e35-6f100c6644f6-metrics-certs podName:a2bbc43c-9feb-4287-9e35-6f100c6644f6 nodeName:}" failed. No retries permitted until 2026-01-22 13:58:47.183681484 +0000 UTC m=+906.594791423 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a2bbc43c-9feb-4287-9e35-6f100c6644f6-metrics-certs") pod "openstack-operator-controller-manager-54d678f547-4dd5j" (UID: "a2bbc43c-9feb-4287-9e35-6f100c6644f6") : secret "metrics-server-cert" not found Jan 22 13:58:44 crc kubenswrapper[4769]: I0122 13:58:44.628909 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hslhq" podUID="8bf4cf7c-e696-4123-af54-e8f96242dea3" containerName="registry-server" containerID="cri-o://cdbf7f7f6a90921d32f3d4d11232230d895172702918c94b29968a993593d333" gracePeriod=2 Jan 22 13:58:45 crc kubenswrapper[4769]: I0122 13:58:45.647077 4769 generic.go:334] "Generic (PLEG): container finished" podID="8bf4cf7c-e696-4123-af54-e8f96242dea3" containerID="cdbf7f7f6a90921d32f3d4d11232230d895172702918c94b29968a993593d333" exitCode=0 Jan 22 13:58:45 crc kubenswrapper[4769]: I0122 13:58:45.647109 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hslhq" event={"ID":"8bf4cf7c-e696-4123-af54-e8f96242dea3","Type":"ContainerDied","Data":"cdbf7f7f6a90921d32f3d4d11232230d895172702918c94b29968a993593d333"} Jan 22 13:58:46 crc kubenswrapper[4769]: I0122 13:58:46.635489 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/13c33fdb-b388-4fdf-996c-544286f47a73-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-zt4sd\" (UID: \"13c33fdb-b388-4fdf-996c-544286f47a73\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-zt4sd" Jan 22 13:58:46 crc kubenswrapper[4769]: E0122 13:58:46.635683 4769 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 22 13:58:46 crc kubenswrapper[4769]: E0122 13:58:46.635756 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13c33fdb-b388-4fdf-996c-544286f47a73-cert podName:13c33fdb-b388-4fdf-996c-544286f47a73 nodeName:}" failed. No retries permitted until 2026-01-22 13:58:54.635739146 +0000 UTC m=+914.046849095 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/13c33fdb-b388-4fdf-996c-544286f47a73-cert") pod "infra-operator-controller-manager-54ccf4f85d-zt4sd" (UID: "13c33fdb-b388-4fdf-996c-544286f47a73") : secret "infra-operator-webhook-server-cert" not found Jan 22 13:58:46 crc kubenswrapper[4769]: I0122 13:58:46.838420 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2b0a07de-4458-4970-a304-a608625bdebf-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8542tcht\" (UID: \"2b0a07de-4458-4970-a304-a608625bdebf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8542tcht" Jan 22 13:58:46 crc kubenswrapper[4769]: E0122 13:58:46.838605 4769 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 13:58:46 crc kubenswrapper[4769]: E0122 13:58:46.838693 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b0a07de-4458-4970-a304-a608625bdebf-cert podName:2b0a07de-4458-4970-a304-a608625bdebf nodeName:}" failed. 
No retries permitted until 2026-01-22 13:58:54.83867586 +0000 UTC m=+914.249785799 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2b0a07de-4458-4970-a304-a608625bdebf-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8542tcht" (UID: "2b0a07de-4458-4970-a304-a608625bdebf") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 13:58:47 crc kubenswrapper[4769]: I0122 13:58:47.245227 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a2bbc43c-9feb-4287-9e35-6f100c6644f6-metrics-certs\") pod \"openstack-operator-controller-manager-54d678f547-4dd5j\" (UID: \"a2bbc43c-9feb-4287-9e35-6f100c6644f6\") " pod="openstack-operators/openstack-operator-controller-manager-54d678f547-4dd5j" Jan 22 13:58:47 crc kubenswrapper[4769]: I0122 13:58:47.245695 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a2bbc43c-9feb-4287-9e35-6f100c6644f6-webhook-certs\") pod \"openstack-operator-controller-manager-54d678f547-4dd5j\" (UID: \"a2bbc43c-9feb-4287-9e35-6f100c6644f6\") " pod="openstack-operators/openstack-operator-controller-manager-54d678f547-4dd5j" Jan 22 13:58:47 crc kubenswrapper[4769]: E0122 13:58:47.245448 4769 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 22 13:58:47 crc kubenswrapper[4769]: E0122 13:58:47.245904 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a2bbc43c-9feb-4287-9e35-6f100c6644f6-metrics-certs podName:a2bbc43c-9feb-4287-9e35-6f100c6644f6 nodeName:}" failed. No retries permitted until 2026-01-22 13:58:55.245877123 +0000 UTC m=+914.656987262 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a2bbc43c-9feb-4287-9e35-6f100c6644f6-metrics-certs") pod "openstack-operator-controller-manager-54d678f547-4dd5j" (UID: "a2bbc43c-9feb-4287-9e35-6f100c6644f6") : secret "metrics-server-cert" not found Jan 22 13:58:47 crc kubenswrapper[4769]: E0122 13:58:47.245751 4769 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 22 13:58:47 crc kubenswrapper[4769]: E0122 13:58:47.246033 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a2bbc43c-9feb-4287-9e35-6f100c6644f6-webhook-certs podName:a2bbc43c-9feb-4287-9e35-6f100c6644f6 nodeName:}" failed. No retries permitted until 2026-01-22 13:58:55.246000806 +0000 UTC m=+914.657110735 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a2bbc43c-9feb-4287-9e35-6f100c6644f6-webhook-certs") pod "openstack-operator-controller-manager-54d678f547-4dd5j" (UID: "a2bbc43c-9feb-4287-9e35-6f100c6644f6") : secret "webhook-server-cert" not found Jan 22 13:58:52 crc kubenswrapper[4769]: E0122 13:58:52.210564 4769 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of cdbf7f7f6a90921d32f3d4d11232230d895172702918c94b29968a993593d333 is running failed: container process not found" containerID="cdbf7f7f6a90921d32f3d4d11232230d895172702918c94b29968a993593d333" cmd=["grpc_health_probe","-addr=:50051"] Jan 22 13:58:52 crc kubenswrapper[4769]: E0122 13:58:52.211530 4769 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of cdbf7f7f6a90921d32f3d4d11232230d895172702918c94b29968a993593d333 is running failed: container process not found" containerID="cdbf7f7f6a90921d32f3d4d11232230d895172702918c94b29968a993593d333" cmd=["grpc_health_probe","-addr=:50051"] Jan 22 13:58:52 crc kubenswrapper[4769]: E0122 13:58:52.211932 4769 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of cdbf7f7f6a90921d32f3d4d11232230d895172702918c94b29968a993593d333 is running failed: container process not found" containerID="cdbf7f7f6a90921d32f3d4d11232230d895172702918c94b29968a993593d333" cmd=["grpc_health_probe","-addr=:50051"] Jan 22 13:58:52 crc kubenswrapper[4769]: E0122 13:58:52.211961 4769 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of cdbf7f7f6a90921d32f3d4d11232230d895172702918c94b29968a993593d333 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-hslhq" podUID="8bf4cf7c-e696-4123-af54-e8f96242dea3" containerName="registry-server" Jan 22 13:58:54 crc kubenswrapper[4769]: I0122 13:58:54.640669 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/13c33fdb-b388-4fdf-996c-544286f47a73-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-zt4sd\" (UID: \"13c33fdb-b388-4fdf-996c-544286f47a73\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-zt4sd" Jan 22 13:58:54 crc kubenswrapper[4769]: I0122 13:58:54.651096 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/13c33fdb-b388-4fdf-996c-544286f47a73-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-zt4sd\" (UID: \"13c33fdb-b388-4fdf-996c-544286f47a73\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-zt4sd" Jan 22 13:58:54 crc kubenswrapper[4769]: I0122 13:58:54.842645 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2b0a07de-4458-4970-a304-a608625bdebf-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8542tcht\" (UID: \"2b0a07de-4458-4970-a304-a608625bdebf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8542tcht" Jan 22 13:58:54 crc kubenswrapper[4769]: E0122 13:58:54.842881 4769 secret.go:188] Couldn't get secret 
openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 13:58:54 crc kubenswrapper[4769]: E0122 13:58:54.842969 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b0a07de-4458-4970-a304-a608625bdebf-cert podName:2b0a07de-4458-4970-a304-a608625bdebf nodeName:}" failed. No retries permitted until 2026-01-22 13:59:10.842945111 +0000 UTC m=+930.254055040 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2b0a07de-4458-4970-a304-a608625bdebf-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8542tcht" (UID: "2b0a07de-4458-4970-a304-a608625bdebf") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 13:58:54 crc kubenswrapper[4769]: I0122 13:58:54.906463 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-c2drt" Jan 22 13:58:54 crc kubenswrapper[4769]: I0122 13:58:54.915209 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-zt4sd" Jan 22 13:58:55 crc kubenswrapper[4769]: I0122 13:58:55.248312 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a2bbc43c-9feb-4287-9e35-6f100c6644f6-metrics-certs\") pod \"openstack-operator-controller-manager-54d678f547-4dd5j\" (UID: \"a2bbc43c-9feb-4287-9e35-6f100c6644f6\") " pod="openstack-operators/openstack-operator-controller-manager-54d678f547-4dd5j" Jan 22 13:58:55 crc kubenswrapper[4769]: I0122 13:58:55.248766 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a2bbc43c-9feb-4287-9e35-6f100c6644f6-webhook-certs\") pod \"openstack-operator-controller-manager-54d678f547-4dd5j\" (UID: \"a2bbc43c-9feb-4287-9e35-6f100c6644f6\") " pod="openstack-operators/openstack-operator-controller-manager-54d678f547-4dd5j" Jan 22 13:58:55 crc kubenswrapper[4769]: E0122 13:58:55.248925 4769 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 22 13:58:55 crc kubenswrapper[4769]: E0122 13:58:55.249010 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a2bbc43c-9feb-4287-9e35-6f100c6644f6-webhook-certs podName:a2bbc43c-9feb-4287-9e35-6f100c6644f6 nodeName:}" failed. No retries permitted until 2026-01-22 13:59:11.248990681 +0000 UTC m=+930.660100610 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a2bbc43c-9feb-4287-9e35-6f100c6644f6-webhook-certs") pod "openstack-operator-controller-manager-54d678f547-4dd5j" (UID: "a2bbc43c-9feb-4287-9e35-6f100c6644f6") : secret "webhook-server-cert" not found Jan 22 13:58:55 crc kubenswrapper[4769]: I0122 13:58:55.253538 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a2bbc43c-9feb-4287-9e35-6f100c6644f6-metrics-certs\") pod \"openstack-operator-controller-manager-54d678f547-4dd5j\" (UID: \"a2bbc43c-9feb-4287-9e35-6f100c6644f6\") " pod="openstack-operators/openstack-operator-controller-manager-54d678f547-4dd5j" Jan 22 13:58:56 crc kubenswrapper[4769]: E0122 13:58:56.285863 4769 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:a8fc8f9d445b1232f446119015b226008b07c6a259f5bebc1fcbb39ec310afe5" Jan 22 13:58:56 crc kubenswrapper[4769]: E0122 13:58:56.286166 4769 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:a8fc8f9d445b1232f446119015b226008b07c6a259f5bebc1fcbb39ec310afe5,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-r9r67,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-7bd9774b6-fzz6p_openstack-operators(8217a619-751c-4d07-a96c-ce3208f08e84): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 13:58:56 crc kubenswrapper[4769]: E0122 13:58:56.287586 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-fzz6p" podUID="8217a619-751c-4d07-a96c-ce3208f08e84" Jan 22 13:58:56 crc kubenswrapper[4769]: E0122 13:58:56.713596 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:a8fc8f9d445b1232f446119015b226008b07c6a259f5bebc1fcbb39ec310afe5\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-fzz6p" podUID="8217a619-751c-4d07-a96c-ce3208f08e84" Jan 22 13:58:56 crc kubenswrapper[4769]: E0122 13:58:56.836272 4769 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:6c88312afa9673f7b72c558368034d7a488ead73080cdcdf581fe85b99263ece" Jan 22 13:58:56 crc kubenswrapper[4769]: E0122 13:58:56.836494 4769 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:6c88312afa9673f7b72c558368034d7a488ead73080cdcdf581fe85b99263ece,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vgjzm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-b45d7bf98-rlcb9_openstack-operators(c6b325d8-50c6-411a-bc7f-938b284f0efb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 13:58:56 crc kubenswrapper[4769]: E0122 13:58:56.837976 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-rlcb9" podUID="c6b325d8-50c6-411a-bc7f-938b284f0efb" Jan 22 13:58:57 crc kubenswrapper[4769]: E0122 13:58:57.513860 4769 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8" Jan 22 13:58:57 crc kubenswrapper[4769]: E0122 13:58:57.515003 4769 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bt5bv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-78c6999f6f-ttb7f_openstack-operators(3d8a97d6-e3bd-49e0-bc78-024286cce303): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 13:58:57 crc kubenswrapper[4769]: E0122 13:58:57.516391 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-ttb7f" podUID="3d8a97d6-e3bd-49e0-bc78-024286cce303" Jan 22 13:58:57 crc kubenswrapper[4769]: E0122 13:58:57.719238 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:6c88312afa9673f7b72c558368034d7a488ead73080cdcdf581fe85b99263ece\\\"\"" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-rlcb9" podUID="c6b325d8-50c6-411a-bc7f-938b284f0efb" Jan 22 13:58:57 crc kubenswrapper[4769]: E0122 13:58:57.719424 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8\\\"\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-ttb7f" podUID="3d8a97d6-e3bd-49e0-bc78-024286cce303" Jan 22 13:58:58 crc kubenswrapper[4769]: E0122 13:58:58.445876 4769 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492" Jan 22 13:58:58 crc kubenswrapper[4769]: E0122 13:58:58.446156 4769 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m 
DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-plxd9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-594c8c9d5d-brq9d_openstack-operators(d40b03ae-0991-4364-85f3-89cf5e8d5686): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 13:58:58 crc kubenswrapper[4769]: E0122 13:58:58.447618 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-brq9d" podUID="d40b03ae-0991-4364-85f3-89cf5e8d5686" Jan 22 13:58:58 crc kubenswrapper[4769]: E0122 13:58:58.726018 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492\\\"\"" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-brq9d" podUID="d40b03ae-0991-4364-85f3-89cf5e8d5686" Jan 22 13:59:00 crc kubenswrapper[4769]: E0122 13:59:00.560817 4769 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:b57d65d2a968705b9067192a7cb33bd4a12489db87e1d05de78c076f2062cab4" Jan 22 13:59:00 crc kubenswrapper[4769]: E0122 13:59:00.561247 4769 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:b57d65d2a968705b9067192a7cb33bd4a12489db87e1d05de78c076f2062cab4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ttq9d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-5d8f59fb49-x8dvt_openstack-operators(ebd5834b-ef11-40bb-9d15-6878767e7bef): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 13:59:00 crc kubenswrapper[4769]: E0122 13:59:00.562361 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-x8dvt" podUID="ebd5834b-ef11-40bb-9d15-6878767e7bef" Jan 22 13:59:00 crc kubenswrapper[4769]: I0122 13:59:00.614508 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hslhq" Jan 22 13:59:00 crc kubenswrapper[4769]: I0122 13:59:00.635874 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bf4cf7c-e696-4123-af54-e8f96242dea3-catalog-content\") pod \"8bf4cf7c-e696-4123-af54-e8f96242dea3\" (UID: \"8bf4cf7c-e696-4123-af54-e8f96242dea3\") " Jan 22 13:59:00 crc kubenswrapper[4769]: I0122 13:59:00.636011 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4nxf\" (UniqueName: \"kubernetes.io/projected/8bf4cf7c-e696-4123-af54-e8f96242dea3-kube-api-access-d4nxf\") pod \"8bf4cf7c-e696-4123-af54-e8f96242dea3\" (UID: \"8bf4cf7c-e696-4123-af54-e8f96242dea3\") " Jan 22 13:59:00 crc kubenswrapper[4769]: I0122 13:59:00.636058 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bf4cf7c-e696-4123-af54-e8f96242dea3-utilities\") pod \"8bf4cf7c-e696-4123-af54-e8f96242dea3\" (UID: \"8bf4cf7c-e696-4123-af54-e8f96242dea3\") " Jan 22 13:59:00 crc kubenswrapper[4769]: I0122 13:59:00.641352 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8bf4cf7c-e696-4123-af54-e8f96242dea3-utilities" (OuterVolumeSpecName: "utilities") pod "8bf4cf7c-e696-4123-af54-e8f96242dea3" (UID: "8bf4cf7c-e696-4123-af54-e8f96242dea3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 13:59:00 crc kubenswrapper[4769]: I0122 13:59:00.656895 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bf4cf7c-e696-4123-af54-e8f96242dea3-kube-api-access-d4nxf" (OuterVolumeSpecName: "kube-api-access-d4nxf") pod "8bf4cf7c-e696-4123-af54-e8f96242dea3" (UID: "8bf4cf7c-e696-4123-af54-e8f96242dea3"). InnerVolumeSpecName "kube-api-access-d4nxf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:59:00 crc kubenswrapper[4769]: I0122 13:59:00.728194 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8bf4cf7c-e696-4123-af54-e8f96242dea3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8bf4cf7c-e696-4123-af54-e8f96242dea3" (UID: "8bf4cf7c-e696-4123-af54-e8f96242dea3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 13:59:00 crc kubenswrapper[4769]: I0122 13:59:00.741553 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4nxf\" (UniqueName: \"kubernetes.io/projected/8bf4cf7c-e696-4123-af54-e8f96242dea3-kube-api-access-d4nxf\") on node \"crc\" DevicePath \"\"" Jan 22 13:59:00 crc kubenswrapper[4769]: I0122 13:59:00.741587 4769 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bf4cf7c-e696-4123-af54-e8f96242dea3-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 13:59:00 crc kubenswrapper[4769]: I0122 13:59:00.741597 4769 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bf4cf7c-e696-4123-af54-e8f96242dea3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 13:59:00 crc kubenswrapper[4769]: I0122 13:59:00.753375 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hslhq" event={"ID":"8bf4cf7c-e696-4123-af54-e8f96242dea3","Type":"ContainerDied","Data":"0c1552ad818b2e1be914c6f1cf75464188673db6cbd965f9b19cec1319993de7"} Jan 22 13:59:00 crc kubenswrapper[4769]: I0122 13:59:00.753434 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hslhq" Jan 22 13:59:00 crc kubenswrapper[4769]: I0122 13:59:00.753471 4769 scope.go:117] "RemoveContainer" containerID="cdbf7f7f6a90921d32f3d4d11232230d895172702918c94b29968a993593d333" Jan 22 13:59:00 crc kubenswrapper[4769]: E0122 13:59:00.757016 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:b57d65d2a968705b9067192a7cb33bd4a12489db87e1d05de78c076f2062cab4\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-x8dvt" podUID="ebd5834b-ef11-40bb-9d15-6878767e7bef" Jan 22 13:59:00 crc kubenswrapper[4769]: I0122 13:59:00.790642 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hslhq"] Jan 22 13:59:00 crc kubenswrapper[4769]: I0122 13:59:00.795669 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hslhq"] Jan 22 13:59:00 crc kubenswrapper[4769]: I0122 13:59:00.891832 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bf4cf7c-e696-4123-af54-e8f96242dea3" path="/var/lib/kubelet/pods/8bf4cf7c-e696-4123-af54-e8f96242dea3/volumes" Jan 22 13:59:02 crc kubenswrapper[4769]: E0122 13:59:02.331423 4769 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349" Jan 22 13:59:02 crc kubenswrapper[4769]: E0122 13:59:02.331961 4769 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dbvd2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b8b6d4659-f2klg_openstack-operators(d8d08194-af60-4614-b425-1b45340cd73b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 13:59:02 crc kubenswrapper[4769]: E0122 13:59:02.333153 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-f2klg" podUID="d8d08194-af60-4614-b425-1b45340cd73b" Jan 22 13:59:02 crc kubenswrapper[4769]: E0122 13:59:02.767586 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-f2klg" podUID="d8d08194-af60-4614-b425-1b45340cd73b" Jan 22 13:59:06 crc kubenswrapper[4769]: I0122 13:59:06.936616 4769 scope.go:117] "RemoveContainer" containerID="ecd6b7d791c1fc22812115bf124726f845b9a1695d08053991cc5bf7429a01b6" Jan 22 13:59:07 crc kubenswrapper[4769]: I0122 13:59:07.335719 4769 scope.go:117] "RemoveContainer" containerID="7c1458b4e0b7ea6519275d802b12eea4d4603db4985bd4c7ba57075375cf25a8" Jan 22 13:59:07 crc kubenswrapper[4769]: I0122 13:59:07.744042 4769 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-54ccf4f85d-zt4sd"] Jan 22 13:59:07 crc kubenswrapper[4769]: I0122 13:59:07.799704 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-wvxp8" event={"ID":"ae11ee9d-5ccf-490d-b457-294820d6a337","Type":"ContainerStarted","Data":"ad7ec24d398406d1040ff7f36144f2a8ca799d9beebc3696ccd828dc5260dc4f"} Jan 22 13:59:07 crc kubenswrapper[4769]: I0122 13:59:07.800626 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-wvxp8" Jan 22 13:59:07 crc kubenswrapper[4769]: I0122 13:59:07.802314 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-w77v6" event={"ID":"a32a1e6f-004c-4675-abed-10078b43492a","Type":"ContainerStarted","Data":"c8df860d085292707a94865925bc76f74eb2adf5f3b264b32862738bb2757fce"} Jan 22 13:59:07 crc kubenswrapper[4769]: I0122 13:59:07.802811 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-w77v6" Jan 22 13:59:07 crc kubenswrapper[4769]: I0122 13:59:07.826134 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-gwzt2" event={"ID":"3c6369d9-2ecf-4187-bb10-76bde13ecd5d","Type":"ContainerStarted","Data":"7a32e1edeefff72ca7ad2bea005d634c3017c761de4476668101d38d375c7823"} Jan 22 13:59:07 crc kubenswrapper[4769]: I0122 13:59:07.826284 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-gwzt2" Jan 22 13:59:07 crc kubenswrapper[4769]: I0122 13:59:07.834126 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-5njtw" event={"ID":"c367fcfb-38d9-4834-970d-7004d16c8249","Type":"ContainerStarted","Data":"ff8a471d8799793a319e5c9a7f14a0b49fad3533484e2fe58f7f47cbb46aa5b2"} Jan 22 13:59:07 crc kubenswrapper[4769]: I0122 13:59:07.834771 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-5njtw" Jan 22 13:59:07 crc kubenswrapper[4769]: I0122 13:59:07.853600 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-wvxp8" podStartSLOduration=6.590314652 podStartE2EDuration="29.853582591s" podCreationTimestamp="2026-01-22 13:58:38 +0000 UTC" firstStartedPulling="2026-01-22 13:58:39.774627442 +0000 UTC m=+899.185737371" lastFinishedPulling="2026-01-22 13:59:03.037895381 +0000 UTC m=+922.449005310" observedRunningTime="2026-01-22 13:59:07.82800703 +0000 UTC m=+927.239116959" watchObservedRunningTime="2026-01-22 13:59:07.853582591 +0000 UTC m=+927.264692520" Jan 22 13:59:07 crc kubenswrapper[4769]: I0122 13:59:07.858155 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-w77v6" podStartSLOduration=7.346294349 podStartE2EDuration="29.85813363s" podCreationTimestamp="2026-01-22 13:58:38 +0000 UTC" firstStartedPulling="2026-01-22 13:58:40.526044519 +0000 UTC m=+899.937154438" lastFinishedPulling="2026-01-22 13:59:03.03788379 +0000 UTC m=+922.448993719" observedRunningTime="2026-01-22 13:59:07.853470718 +0000 UTC 
m=+927.264580657" watchObservedRunningTime="2026-01-22 13:59:07.85813363 +0000 UTC m=+927.269243559" Jan 22 13:59:07 crc kubenswrapper[4769]: I0122 13:59:07.862536 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-pkl6g" event={"ID":"ed1198a5-a7fa-4ab4-9656-8e9700deec37","Type":"ContainerStarted","Data":"621d9d45842fa5ef8fa011440ec24b62fbd43b5ab35143315d77bcf3d9cfeaea"} Jan 22 13:59:07 crc kubenswrapper[4769]: I0122 13:59:07.863369 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-pkl6g" Jan 22 13:59:07 crc kubenswrapper[4769]: I0122 13:59:07.864710 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-zt4sd" event={"ID":"13c33fdb-b388-4fdf-996c-544286f47a73","Type":"ContainerStarted","Data":"3c9258ff3e30066454f1e0fe0b06fcab9da82c786502c650c4f2b7365b9e3fb2"} Jan 22 13:59:07 crc kubenswrapper[4769]: I0122 13:59:07.879609 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-gwzt2" podStartSLOduration=7.387949936 podStartE2EDuration="29.879591032s" podCreationTimestamp="2026-01-22 13:58:38 +0000 UTC" firstStartedPulling="2026-01-22 13:58:40.546381518 +0000 UTC m=+899.957491447" lastFinishedPulling="2026-01-22 13:59:03.038022614 +0000 UTC m=+922.449132543" observedRunningTime="2026-01-22 13:59:07.875869355 +0000 UTC m=+927.286979284" watchObservedRunningTime="2026-01-22 13:59:07.879591032 +0000 UTC m=+927.290700961" Jan 22 13:59:07 crc kubenswrapper[4769]: I0122 13:59:07.965650 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-pkl6g" podStartSLOduration=7.533872041 podStartE2EDuration="29.965631767s" podCreationTimestamp="2026-01-22 13:58:38 +0000 UTC" firstStartedPulling="2026-01-22 13:58:40.606098844 +0000 UTC m=+900.017208773" lastFinishedPulling="2026-01-22 13:59:03.03785857 +0000 UTC m=+922.448968499" observedRunningTime="2026-01-22 13:59:07.964872357 +0000 UTC m=+927.375982296" watchObservedRunningTime="2026-01-22 13:59:07.965631767 +0000 UTC m=+927.376741696" Jan 22 13:59:07 crc kubenswrapper[4769]: I0122 13:59:07.967187 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-5njtw" podStartSLOduration=7.090781224 podStartE2EDuration="29.967178948s" podCreationTimestamp="2026-01-22 13:58:38 +0000 UTC" firstStartedPulling="2026-01-22 13:58:40.161459416 +0000 UTC m=+899.572569345" lastFinishedPulling="2026-01-22 13:59:03.03785714 +0000 UTC m=+922.448967069" observedRunningTime="2026-01-22 13:59:07.931378019 +0000 UTC m=+927.342487978" watchObservedRunningTime="2026-01-22 13:59:07.967178948 +0000 UTC m=+927.378288897" Jan 22 13:59:08 crc kubenswrapper[4769]: I0122 13:59:08.875945 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-2q2v2" event={"ID":"bc0b4b03-ee7e-44ed-9c1f-f481ae1a3049","Type":"ContainerStarted","Data":"29cb0068743d3e2ec1ba622ac6694b5c995ea608c7b9a9bc35fa9f03a07b266d"} Jan 22 13:59:08 crc kubenswrapper[4769]: I0122 13:59:08.876012 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-2q2v2" Jan 22 13:59:08 crc 
kubenswrapper[4769]: I0122 13:59:08.878870 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-ctf5z" event={"ID":"f13c0d19-4c14-4897-bbc5-5c220d207e41","Type":"ContainerStarted","Data":"71ad5f08943929d364c3557c81b7f32f75166746528ec9d87f97c8d6e587c9d9"} Jan 22 13:59:08 crc kubenswrapper[4769]: I0122 13:59:08.879056 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-ctf5z" Jan 22 13:59:08 crc kubenswrapper[4769]: I0122 13:59:08.881054 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hv48h" event={"ID":"14005034-1ce8-4d62-afbc-66cd1d0d9be1","Type":"ContainerStarted","Data":"eda1a43523bb7d2a34ca9fd4426880d617840cc51357f657f90c8add1f4fb7b2"} Jan 22 13:59:08 crc kubenswrapper[4769]: I0122 13:59:08.892959 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-prfwv" event={"ID":"11299941-70c0-41a8-ad9c-5c4648c3aa95","Type":"ContainerStarted","Data":"ad2f145ab6dc28c07b31645d823a995628fed4f7b6114497dcd9ca97ae3728bc"} Jan 22 13:59:08 crc kubenswrapper[4769]: I0122 13:59:08.893148 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-prfwv" Jan 22 13:59:08 crc kubenswrapper[4769]: I0122 13:59:08.896175 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-8rxgq" event={"ID":"7d908338-dcdc-4423-b719-02d30f3834ed","Type":"ContainerStarted","Data":"bca7f6294445bc9a0d140e2f39f10fb05c60d067a781dd29b6e4a4c1638298ae"} Jan 22 13:59:08 crc kubenswrapper[4769]: I0122 13:59:08.896256 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-8rxgq" Jan 22 13:59:08 crc kubenswrapper[4769]: I0122 13:59:08.897932 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-2q2v2" podStartSLOduration=7.570401095 podStartE2EDuration="30.897920247s" podCreationTimestamp="2026-01-22 13:58:38 +0000 UTC" firstStartedPulling="2026-01-22 13:58:39.71041375 +0000 UTC m=+899.121523679" lastFinishedPulling="2026-01-22 13:59:03.037932902 +0000 UTC m=+922.449042831" observedRunningTime="2026-01-22 13:59:08.896366776 +0000 UTC m=+928.307476715" watchObservedRunningTime="2026-01-22 13:59:08.897920247 +0000 UTC m=+928.309030176" Jan 22 13:59:08 crc kubenswrapper[4769]: I0122 13:59:08.898770 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-fzz6p" event={"ID":"8217a619-751c-4d07-a96c-ce3208f08e84","Type":"ContainerStarted","Data":"25be5054df9f1b99c2fb0aef13520fcde4eabe101c359d90267fdf8a547f1cfd"} Jan 22 13:59:08 crc kubenswrapper[4769]: I0122 13:59:08.899492 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-fzz6p" Jan 22 13:59:08 crc kubenswrapper[4769]: I0122 13:59:08.900890 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-54q5q" 
event={"ID":"141f0476-23eb-4a43-a4ac-4d33c12bfb5b","Type":"ContainerStarted","Data":"5918743ed5b448c2a8f37e9bc67f1fded7d5f4c1000b1596a0f23dea4d83035b"} Jan 22 13:59:08 crc kubenswrapper[4769]: I0122 13:59:08.901279 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-54q5q" Jan 22 13:59:08 crc kubenswrapper[4769]: I0122 13:59:08.902905 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-mwhh9" event={"ID":"80a16478-da8a-4d2f-89df-163fada49abe","Type":"ContainerStarted","Data":"2de4c10f55c8e21ae16eae53c51b1df9c1e5401445367aa40dd68be1ad708e72"} Jan 22 13:59:08 crc kubenswrapper[4769]: I0122 13:59:08.903237 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-mwhh9" Jan 22 13:59:08 crc kubenswrapper[4769]: I0122 13:59:08.905016 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-b2w8p" event={"ID":"31021ae3-dbb7-4ceb-8737-31052d849f0a","Type":"ContainerStarted","Data":"d20d82b0dc1aec4cf3c84014da525ae4fb07ab88e03bd7cebbeb7b830cdfa553"} Jan 22 13:59:08 crc kubenswrapper[4769]: I0122 13:59:08.905308 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-b2w8p" Jan 22 13:59:08 crc kubenswrapper[4769]: I0122 13:59:08.906967 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-jbtsm" event={"ID":"d931ff7f-f554-4249-bc34-2cd09fc97427","Type":"ContainerStarted","Data":"e4b9e080024c42102937a028460c06374487901e7f2a970d08b8687992c15919"} Jan 22 13:59:08 crc kubenswrapper[4769]: I0122 13:59:08.907394 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-jbtsm" Jan 22 13:59:08 crc kubenswrapper[4769]: I0122 13:59:08.924466 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hv48h" podStartSLOduration=3.03295855 podStartE2EDuration="29.924443701s" podCreationTimestamp="2026-01-22 13:58:39 +0000 UTC" firstStartedPulling="2026-01-22 13:58:40.61938328 +0000 UTC m=+900.030493209" lastFinishedPulling="2026-01-22 13:59:07.510868431 +0000 UTC m=+926.921978360" observedRunningTime="2026-01-22 13:59:08.920897259 +0000 UTC m=+928.332007198" watchObservedRunningTime="2026-01-22 13:59:08.924443701 +0000 UTC m=+928.335553630" Jan 22 13:59:08 crc kubenswrapper[4769]: I0122 13:59:08.947871 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-ctf5z" podStartSLOduration=4.106021145 podStartE2EDuration="30.947842965s" podCreationTimestamp="2026-01-22 13:58:38 +0000 UTC" firstStartedPulling="2026-01-22 13:58:40.621505085 +0000 UTC m=+900.032615014" lastFinishedPulling="2026-01-22 13:59:07.463326915 +0000 UTC m=+926.874436834" observedRunningTime="2026-01-22 13:59:08.942411572 +0000 UTC m=+928.353521511" watchObservedRunningTime="2026-01-22 13:59:08.947842965 +0000 UTC m=+928.358952894" Jan 22 13:59:08 crc kubenswrapper[4769]: I0122 13:59:08.969100 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-8rxgq" 
podStartSLOduration=7.956433789 podStartE2EDuration="30.969080681s" podCreationTimestamp="2026-01-22 13:58:38 +0000 UTC" firstStartedPulling="2026-01-22 13:58:40.025348531 +0000 UTC m=+899.436458460" lastFinishedPulling="2026-01-22 13:59:03.037995423 +0000 UTC m=+922.449105352" observedRunningTime="2026-01-22 13:59:08.967618463 +0000 UTC m=+928.378728402" watchObservedRunningTime="2026-01-22 13:59:08.969080681 +0000 UTC m=+928.380190620" Jan 22 13:59:08 crc kubenswrapper[4769]: I0122 13:59:08.984456 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-prfwv" podStartSLOduration=4.268121433 podStartE2EDuration="30.984438863s" podCreationTimestamp="2026-01-22 13:58:38 +0000 UTC" firstStartedPulling="2026-01-22 13:58:40.619518194 +0000 UTC m=+900.030628123" lastFinishedPulling="2026-01-22 13:59:07.335835624 +0000 UTC m=+926.746945553" observedRunningTime="2026-01-22 13:59:08.983276123 +0000 UTC m=+928.394386062" watchObservedRunningTime="2026-01-22 13:59:08.984438863 +0000 UTC m=+928.395548792" Jan 22 13:59:09 crc kubenswrapper[4769]: I0122 13:59:09.003844 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-b2w8p" podStartSLOduration=3.306402186 podStartE2EDuration="30.003821881s" podCreationTimestamp="2026-01-22 13:58:39 +0000 UTC" firstStartedPulling="2026-01-22 13:58:40.614928973 +0000 UTC m=+900.026038912" lastFinishedPulling="2026-01-22 13:59:07.312348678 +0000 UTC m=+926.723458607" observedRunningTime="2026-01-22 13:59:08.997225798 +0000 UTC m=+928.408335727" watchObservedRunningTime="2026-01-22 13:59:09.003821881 +0000 UTC m=+928.414931810" Jan 22 13:59:09 crc kubenswrapper[4769]: I0122 13:59:09.021199 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-jbtsm" podStartSLOduration=4.174284416 podStartE2EDuration="31.021178227s" podCreationTimestamp="2026-01-22 13:58:38 +0000 UTC" firstStartedPulling="2026-01-22 13:58:40.616503595 +0000 UTC m=+900.027613524" lastFinishedPulling="2026-01-22 13:59:07.463397406 +0000 UTC m=+926.874507335" observedRunningTime="2026-01-22 13:59:09.018108366 +0000 UTC m=+928.429218285" watchObservedRunningTime="2026-01-22 13:59:09.021178227 +0000 UTC m=+928.432288156" Jan 22 13:59:09 crc kubenswrapper[4769]: I0122 13:59:09.058651 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-mwhh9" podStartSLOduration=4.21345964 podStartE2EDuration="31.058634218s" podCreationTimestamp="2026-01-22 13:58:38 +0000 UTC" firstStartedPulling="2026-01-22 13:58:40.618980759 +0000 UTC m=+900.030090688" lastFinishedPulling="2026-01-22 13:59:07.464155337 +0000 UTC m=+926.875265266" observedRunningTime="2026-01-22 13:59:09.055890706 +0000 UTC m=+928.467000635" watchObservedRunningTime="2026-01-22 13:59:09.058634218 +0000 UTC m=+928.469744147" Jan 22 13:59:09 crc kubenswrapper[4769]: I0122 13:59:09.093502 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-54q5q" podStartSLOduration=7.776490474 podStartE2EDuration="31.093486731s" podCreationTimestamp="2026-01-22 13:58:38 +0000 UTC" firstStartedPulling="2026-01-22 13:58:39.720915554 +0000 UTC m=+899.132025473" lastFinishedPulling="2026-01-22 13:59:03.037911801 +0000 UTC 
m=+922.449021730" observedRunningTime="2026-01-22 13:59:09.08925758 +0000 UTC m=+928.500367509" watchObservedRunningTime="2026-01-22 13:59:09.093486731 +0000 UTC m=+928.504596660" Jan 22 13:59:09 crc kubenswrapper[4769]: I0122 13:59:09.112061 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-fzz6p" podStartSLOduration=3.9988791089999998 podStartE2EDuration="31.112043977s" podCreationTimestamp="2026-01-22 13:58:38 +0000 UTC" firstStartedPulling="2026-01-22 13:58:40.551958754 +0000 UTC m=+899.963068683" lastFinishedPulling="2026-01-22 13:59:07.665123622 +0000 UTC m=+927.076233551" observedRunningTime="2026-01-22 13:59:09.104344486 +0000 UTC m=+928.515454405" watchObservedRunningTime="2026-01-22 13:59:09.112043977 +0000 UTC m=+928.523153906" Jan 22 13:59:10 crc kubenswrapper[4769]: I0122 13:59:10.482172 4769 patch_prober.go:28] interesting pod/machine-config-daemon-hwhw7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 13:59:10 crc kubenswrapper[4769]: I0122 13:59:10.482513 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 13:59:10 crc kubenswrapper[4769]: I0122 13:59:10.903857 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2b0a07de-4458-4970-a304-a608625bdebf-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8542tcht\" (UID: \"2b0a07de-4458-4970-a304-a608625bdebf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8542tcht" Jan 22 13:59:10 crc kubenswrapper[4769]: I0122 13:59:10.920523 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2b0a07de-4458-4970-a304-a608625bdebf-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8542tcht\" (UID: \"2b0a07de-4458-4970-a304-a608625bdebf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8542tcht" Jan 22 13:59:10 crc kubenswrapper[4769]: I0122 13:59:10.924230 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-zt4sd" event={"ID":"13c33fdb-b388-4fdf-996c-544286f47a73","Type":"ContainerStarted","Data":"7bc9efabe45c34437909b125f12d6fc6ec395ccc5f1264594b0ca1c7198350b2"} Jan 22 13:59:10 crc kubenswrapper[4769]: I0122 13:59:10.924387 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-zt4sd" Jan 22 13:59:10 crc kubenswrapper[4769]: I0122 13:59:10.926662 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-ttb7f" event={"ID":"3d8a97d6-e3bd-49e0-bc78-024286cce303","Type":"ContainerStarted","Data":"681d24f063b3e61adc895b535f0dcc78df7f1de119487182b35fd46bb0132143"} Jan 22 13:59:10 crc kubenswrapper[4769]: I0122 13:59:10.927091 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-ttb7f" Jan 22 13:59:10 crc kubenswrapper[4769]: I0122 13:59:10.951574 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-zt4sd" podStartSLOduration=30.164625941 podStartE2EDuration="32.95155052s" podCreationTimestamp="2026-01-22 13:58:38 +0000 UTC" firstStartedPulling="2026-01-22 13:59:07.766056637 +0000 UTC m=+927.177166566" lastFinishedPulling="2026-01-22 13:59:10.552981216 +0000 UTC m=+929.964091145" observedRunningTime="2026-01-22 13:59:10.945649195 +0000 UTC m=+930.356759134" watchObservedRunningTime="2026-01-22 13:59:10.95155052 +0000 UTC m=+930.362660449" Jan 22 13:59:11 crc kubenswrapper[4769]: I0122 13:59:11.021750 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-sn876" Jan 22 13:59:11 crc kubenswrapper[4769]: I0122 13:59:11.030923 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8542tcht" Jan 22 13:59:11 crc kubenswrapper[4769]: I0122 13:59:11.323169 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a2bbc43c-9feb-4287-9e35-6f100c6644f6-webhook-certs\") pod \"openstack-operator-controller-manager-54d678f547-4dd5j\" (UID: \"a2bbc43c-9feb-4287-9e35-6f100c6644f6\") " pod="openstack-operators/openstack-operator-controller-manager-54d678f547-4dd5j" Jan 22 13:59:11 crc kubenswrapper[4769]: I0122 13:59:11.331293 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a2bbc43c-9feb-4287-9e35-6f100c6644f6-webhook-certs\") pod \"openstack-operator-controller-manager-54d678f547-4dd5j\" (UID: \"a2bbc43c-9feb-4287-9e35-6f100c6644f6\") " pod="openstack-operators/openstack-operator-controller-manager-54d678f547-4dd5j" Jan 22 13:59:11 crc kubenswrapper[4769]: I0122 13:59:11.440684 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-ttb7f" podStartSLOduration=3.410656971 podStartE2EDuration="33.440666377s" podCreationTimestamp="2026-01-22 13:58:38 +0000 UTC" firstStartedPulling="2026-01-22 13:58:40.520044863 +0000 UTC m=+899.931154782" lastFinishedPulling="2026-01-22 13:59:10.550054259 +0000 UTC m=+929.961164188" observedRunningTime="2026-01-22 13:59:10.967505578 +0000 UTC m=+930.378615517" watchObservedRunningTime="2026-01-22 13:59:11.440666377 +0000 UTC m=+930.851776296" Jan 22 13:59:11 crc kubenswrapper[4769]: I0122 13:59:11.444390 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8542tcht"] Jan 22 13:59:11 crc kubenswrapper[4769]: W0122 13:59:11.449388 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b0a07de_4458_4970_a304_a608625bdebf.slice/crio-6c279ea742bba02b302611eb33d71746ea1fafa31ba3735980cc3f1d33f87ad6 WatchSource:0}: Error finding container 6c279ea742bba02b302611eb33d71746ea1fafa31ba3735980cc3f1d33f87ad6: Status 404 returned error can't find the container with id 6c279ea742bba02b302611eb33d71746ea1fafa31ba3735980cc3f1d33f87ad6 Jan 22 13:59:11 crc kubenswrapper[4769]: I0122 13:59:11.563004 4769 reflector.go:368] 
Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-hlb79" Jan 22 13:59:11 crc kubenswrapper[4769]: I0122 13:59:11.571868 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-54d678f547-4dd5j" Jan 22 13:59:11 crc kubenswrapper[4769]: I0122 13:59:11.934680 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-brq9d" event={"ID":"d40b03ae-0991-4364-85f3-89cf5e8d5686","Type":"ContainerStarted","Data":"5c7e365f66b93d50321f79dcfec06dc0b8ff2c5b45694d6f9f9d52cbb2246ead"} Jan 22 13:59:11 crc kubenswrapper[4769]: I0122 13:59:11.935330 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-brq9d" Jan 22 13:59:11 crc kubenswrapper[4769]: I0122 13:59:11.937179 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8542tcht" event={"ID":"2b0a07de-4458-4970-a304-a608625bdebf","Type":"ContainerStarted","Data":"6c279ea742bba02b302611eb33d71746ea1fafa31ba3735980cc3f1d33f87ad6"} Jan 22 13:59:11 crc kubenswrapper[4769]: I0122 13:59:11.953508 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-brq9d" podStartSLOduration=2.789367998 podStartE2EDuration="33.953490235s" podCreationTimestamp="2026-01-22 13:58:38 +0000 UTC" firstStartedPulling="2026-01-22 13:58:40.10708059 +0000 UTC m=+899.518190519" lastFinishedPulling="2026-01-22 13:59:11.271202827 +0000 UTC m=+930.682312756" observedRunningTime="2026-01-22 13:59:11.951931114 +0000 UTC m=+931.363041043" watchObservedRunningTime="2026-01-22 13:59:11.953490235 +0000 UTC m=+931.364600154" Jan 22 13:59:12 crc kubenswrapper[4769]: I0122 13:59:12.010896 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-54d678f547-4dd5j"] Jan 22 13:59:12 crc kubenswrapper[4769]: I0122 13:59:12.943821 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-54d678f547-4dd5j" event={"ID":"a2bbc43c-9feb-4287-9e35-6f100c6644f6","Type":"ContainerStarted","Data":"f4e37806e6527062db89529eef98d005defcffc5552dda969c9d0b0ed2d49f3d"} Jan 22 13:59:13 crc kubenswrapper[4769]: I0122 13:59:13.952174 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-54d678f547-4dd5j" event={"ID":"a2bbc43c-9feb-4287-9e35-6f100c6644f6","Type":"ContainerStarted","Data":"66b32dd0d9268ff4a1b61e4321a3d9e00c1ab00f45e00aad22cb81d48102627b"} Jan 22 13:59:13 crc kubenswrapper[4769]: I0122 13:59:13.953545 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-54d678f547-4dd5j" Jan 22 13:59:13 crc kubenswrapper[4769]: I0122 13:59:13.959395 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-x8dvt" event={"ID":"ebd5834b-ef11-40bb-9d15-6878767e7bef","Type":"ContainerStarted","Data":"490eeea26278e03b32ca9f561648ce2054d428fd80235000f234383ad8c07695"} Jan 22 13:59:13 crc kubenswrapper[4769]: I0122 13:59:13.959640 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-x8dvt" Jan 22 13:59:13 crc kubenswrapper[4769]: I0122 13:59:13.960982 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-rlcb9" event={"ID":"c6b325d8-50c6-411a-bc7f-938b284f0efb","Type":"ContainerStarted","Data":"42c9aff5afd5ce55f8aec69b06fac67459da53bfa3c6146529cc21fbf0d8bc1d"} Jan 22 13:59:13 crc kubenswrapper[4769]: I0122 13:59:13.961176 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-rlcb9" Jan 22 13:59:13 crc kubenswrapper[4769]: I0122 13:59:13.979186 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-54d678f547-4dd5j" podStartSLOduration=34.979169416 podStartE2EDuration="34.979169416s" podCreationTimestamp="2026-01-22 13:58:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:59:13.973769084 +0000 UTC m=+933.384879043" watchObservedRunningTime="2026-01-22 13:59:13.979169416 +0000 UTC m=+933.390279345" Jan 22 13:59:13 crc kubenswrapper[4769]: I0122 13:59:13.993129 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-x8dvt" podStartSLOduration=2.629379915 podStartE2EDuration="35.993112571s" podCreationTimestamp="2026-01-22 13:58:38 +0000 UTC" firstStartedPulling="2026-01-22 13:58:40.107420148 +0000 UTC m=+899.518530077" lastFinishedPulling="2026-01-22 13:59:13.471152804 +0000 UTC m=+932.882262733" observedRunningTime="2026-01-22 13:59:13.992187307 +0000 UTC m=+933.403297256" watchObservedRunningTime="2026-01-22 13:59:13.993112571 +0000 UTC m=+933.404222500" Jan 22 13:59:14 crc kubenswrapper[4769]: I0122 13:59:14.007241 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-rlcb9" podStartSLOduration=2.34845952 podStartE2EDuration="36.007226401s" podCreationTimestamp="2026-01-22 13:58:38 +0000 UTC" firstStartedPulling="2026-01-22 13:58:39.779464269 +0000 UTC m=+899.190574198" lastFinishedPulling="2026-01-22 13:59:13.43823115 +0000 UTC m=+932.849341079" observedRunningTime="2026-01-22 13:59:14.005730062 +0000 UTC m=+933.416839991" watchObservedRunningTime="2026-01-22 13:59:14.007226401 +0000 UTC m=+933.418336330" Jan 22 13:59:14 crc kubenswrapper[4769]: I0122 13:59:14.968229 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8542tcht" event={"ID":"2b0a07de-4458-4970-a304-a608625bdebf","Type":"ContainerStarted","Data":"c66f2eec601af87c23748c91b258843ae01fb9d65a536001625263bef5a7a158"} Jan 22 13:59:14 crc kubenswrapper[4769]: I0122 13:59:14.998289 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8542tcht" podStartSLOduration=33.862026637 podStartE2EDuration="36.998263239s" podCreationTimestamp="2026-01-22 13:58:38 +0000 UTC" firstStartedPulling="2026-01-22 13:59:11.451448659 +0000 UTC m=+930.862558588" lastFinishedPulling="2026-01-22 13:59:14.587685261 +0000 UTC m=+933.998795190" observedRunningTime="2026-01-22 13:59:14.990329352 +0000 UTC m=+934.401439321" watchObservedRunningTime="2026-01-22 
Jan 22 13:59:14 crc kubenswrapper[4769]: I0122 13:59:14.998289 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8542tcht" podStartSLOduration=33.862026637 podStartE2EDuration="36.998263239s" podCreationTimestamp="2026-01-22 13:58:38 +0000 UTC" firstStartedPulling="2026-01-22 13:59:11.451448659 +0000 UTC m=+930.862558588" lastFinishedPulling="2026-01-22 13:59:14.587685261 +0000 UTC m=+933.998795190" observedRunningTime="2026-01-22 13:59:14.990329352 +0000 UTC m=+934.401439321" watchObservedRunningTime="2026-01-22 13:59:14.998263239 +0000 UTC m=+934.409373208"
Jan 22 13:59:15 crc kubenswrapper[4769]: I0122 13:59:15.975225 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8542tcht"
Jan 22 13:59:16 crc kubenswrapper[4769]: I0122 13:59:16.981589 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-f2klg" event={"ID":"d8d08194-af60-4614-b425-1b45340cd73b","Type":"ContainerStarted","Data":"ef70237bd566ba26725c3391c44cdb17bffd3c1620a42bb5531d8b8c7f1b88af"}
Jan 22 13:59:16 crc kubenswrapper[4769]: I0122 13:59:16.982620 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-f2klg"
Jan 22 13:59:18 crc kubenswrapper[4769]: I0122 13:59:18.857552 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-54q5q"
Jan 22 13:59:18 crc kubenswrapper[4769]: I0122 13:59:18.877104 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-2q2v2"
Jan 22 13:59:18 crc kubenswrapper[4769]: I0122 13:59:18.880676 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-f2klg" podStartSLOduration=5.076775172 podStartE2EDuration="40.880658803s" podCreationTimestamp="2026-01-22 13:58:38 +0000 UTC" firstStartedPulling="2026-01-22 13:58:40.49690269 +0000 UTC m=+899.908012619" lastFinishedPulling="2026-01-22 13:59:16.300786321 +0000 UTC m=+935.711896250" observedRunningTime="2026-01-22 13:59:16.997439396 +0000 UTC m=+936.408549335" watchObservedRunningTime="2026-01-22 13:59:18.880658803 +0000 UTC m=+938.291768732"
Jan 22 13:59:18 crc kubenswrapper[4769]: I0122 13:59:18.894114 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-rlcb9"
Jan 22 13:59:18 crc kubenswrapper[4769]: I0122 13:59:18.909408 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-wvxp8"
Jan 22 13:59:18 crc kubenswrapper[4769]: I0122 13:59:18.928310 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-brq9d"
Jan 22 13:59:18 crc kubenswrapper[4769]: I0122 13:59:18.950478 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-8rxgq"
Jan 22 13:59:19 crc kubenswrapper[4769]: I0122 13:59:19.065377 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-5njtw"
Jan 22 13:59:19 crc kubenswrapper[4769]: I0122 13:59:19.158959 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-w77v6"
Jan 22 13:59:19 crc kubenswrapper[4769]: I0122 13:59:19.159414 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-ttb7f"
Jan 22 13:59:19 crc kubenswrapper[4769]: I0122 13:59:19.177976 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-x8dvt"
Jan 22 13:59:19 crc kubenswrapper[4769]: I0122 13:59:19.193522 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-mwhh9"
Jan 22 13:59:19 crc kubenswrapper[4769]: I0122 13:59:19.213020 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-fzz6p"
Jan 22 13:59:19 crc kubenswrapper[4769]: I0122 13:59:19.241285 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-ctf5z"
Jan 22 13:59:19 crc kubenswrapper[4769]: I0122 13:59:19.255902 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-prfwv"
Jan 22 13:59:19 crc kubenswrapper[4769]: I0122 13:59:19.280709 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-jbtsm"
Jan 22 13:59:19 crc kubenswrapper[4769]: I0122 13:59:19.391434 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-gwzt2"
Jan 22 13:59:19 crc kubenswrapper[4769]: I0122 13:59:19.507609 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-pkl6g"
Jan 22 13:59:19 crc kubenswrapper[4769]: I0122 13:59:19.832997 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-b2w8p"
Jan 22 13:59:21 crc kubenswrapper[4769]: I0122 13:59:21.038320 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8542tcht"
Jan 22 13:59:21 crc kubenswrapper[4769]: I0122 13:59:21.580857 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-54d678f547-4dd5j"
Jan 22 13:59:24 crc kubenswrapper[4769]: I0122 13:59:24.922756 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-zt4sd"
Jan 22 13:59:29 crc kubenswrapper[4769]: I0122 13:59:29.140234 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-f2klg"
Jan 22 13:59:40 crc kubenswrapper[4769]: I0122 13:59:40.482646 4769 patch_prober.go:28] interesting pod/machine-config-daemon-hwhw7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 13:59:40 crc kubenswrapper[4769]: I0122 13:59:40.483248 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 13:59:40 crc kubenswrapper[4769]: I0122 13:59:40.483300 4769 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7"
Jan 22 13:59:40 crc kubenswrapper[4769]: I0122 13:59:40.484001 4769 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ee8cd9f7d29583d39d5d09ca76eab4931e04c9d5e08aa5de68839001387a3d8e"} pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 22 13:59:40 crc kubenswrapper[4769]: I0122 13:59:40.484067 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419" containerName="machine-config-daemon" containerID="cri-o://ee8cd9f7d29583d39d5d09ca76eab4931e04c9d5e08aa5de68839001387a3d8e" gracePeriod=600
Jan 22 13:59:45 crc kubenswrapper[4769]: I0122 13:59:45.181021 4769 generic.go:334] "Generic (PLEG): container finished" podID="f0af8746-c9f0-48e6-8a60-02fed286b419" containerID="ee8cd9f7d29583d39d5d09ca76eab4931e04c9d5e08aa5de68839001387a3d8e" exitCode=0
Jan 22 13:59:45 crc kubenswrapper[4769]: I0122 13:59:45.181078 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" event={"ID":"f0af8746-c9f0-48e6-8a60-02fed286b419","Type":"ContainerDied","Data":"ee8cd9f7d29583d39d5d09ca76eab4931e04c9d5e08aa5de68839001387a3d8e"}
Jan 22 13:59:45 crc kubenswrapper[4769]: I0122 13:59:45.181487 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" event={"ID":"f0af8746-c9f0-48e6-8a60-02fed286b419","Type":"ContainerStarted","Data":"53e8fc2db9705c596d7460e51a2fbb034ceda2ed4d75e601aaaaedcba02d24aa"}
Jan 22 13:59:45 crc kubenswrapper[4769]: I0122 13:59:45.181507 4769 scope.go:117] "RemoveContainer" containerID="3179ab0de90548977badcb720a49e9de55c423265ce63debd6542edff4ab9f17"
Jan 22 13:59:47 crc kubenswrapper[4769]: I0122 13:59:47.196700 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-hwccv"]
Jan 22 13:59:47 crc kubenswrapper[4769]: E0122 13:59:47.202869 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bf4cf7c-e696-4123-af54-e8f96242dea3" containerName="registry-server"
Jan 22 13:59:47 crc kubenswrapper[4769]: I0122 13:59:47.202919 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bf4cf7c-e696-4123-af54-e8f96242dea3" containerName="registry-server"
Jan 22 13:59:47 crc kubenswrapper[4769]: E0122 13:59:47.202931 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bf4cf7c-e696-4123-af54-e8f96242dea3" containerName="extract-utilities"
Jan 22 13:59:47 crc kubenswrapper[4769]: I0122 13:59:47.202940 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bf4cf7c-e696-4123-af54-e8f96242dea3" containerName="extract-utilities"
Jan 22 13:59:47 crc kubenswrapper[4769]: E0122 13:59:47.202962 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bf4cf7c-e696-4123-af54-e8f96242dea3" containerName="extract-content"
Jan 22 13:59:47 crc kubenswrapper[4769]: I0122 13:59:47.202970 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bf4cf7c-e696-4123-af54-e8f96242dea3" containerName="extract-content"
podUID="8bf4cf7c-e696-4123-af54-e8f96242dea3" containerName="registry-server" Jan 22 13:59:47 crc kubenswrapper[4769]: I0122 13:59:47.204022 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-hwccv" Jan 22 13:59:47 crc kubenswrapper[4769]: I0122 13:59:47.206872 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-hwccv"] Jan 22 13:59:47 crc kubenswrapper[4769]: I0122 13:59:47.207101 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 22 13:59:47 crc kubenswrapper[4769]: I0122 13:59:47.207116 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 22 13:59:47 crc kubenswrapper[4769]: I0122 13:59:47.207201 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-qpvwm" Jan 22 13:59:47 crc kubenswrapper[4769]: I0122 13:59:47.207101 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 22 13:59:47 crc kubenswrapper[4769]: I0122 13:59:47.223635 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9h4dg\" (UniqueName: \"kubernetes.io/projected/31fc43cb-0b18-49b4-a19b-6047e962f742-kube-api-access-9h4dg\") pod \"dnsmasq-dns-675f4bcbfc-hwccv\" (UID: \"31fc43cb-0b18-49b4-a19b-6047e962f742\") " pod="openstack/dnsmasq-dns-675f4bcbfc-hwccv" Jan 22 13:59:47 crc kubenswrapper[4769]: I0122 13:59:47.223724 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31fc43cb-0b18-49b4-a19b-6047e962f742-config\") pod \"dnsmasq-dns-675f4bcbfc-hwccv\" (UID: \"31fc43cb-0b18-49b4-a19b-6047e962f742\") " pod="openstack/dnsmasq-dns-675f4bcbfc-hwccv" Jan 22 13:59:47 crc kubenswrapper[4769]: I0122 13:59:47.261930 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-8mfxs"] Jan 22 13:59:47 crc kubenswrapper[4769]: I0122 13:59:47.263151 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-8mfxs" Jan 22 13:59:47 crc kubenswrapper[4769]: I0122 13:59:47.265177 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 22 13:59:47 crc kubenswrapper[4769]: I0122 13:59:47.276364 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-8mfxs"] Jan 22 13:59:47 crc kubenswrapper[4769]: I0122 13:59:47.324476 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ba28aa8-af6e-4b05-b308-1a5d989da923-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-8mfxs\" (UID: \"8ba28aa8-af6e-4b05-b308-1a5d989da923\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8mfxs" Jan 22 13:59:47 crc kubenswrapper[4769]: I0122 13:59:47.324521 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9h4dg\" (UniqueName: \"kubernetes.io/projected/31fc43cb-0b18-49b4-a19b-6047e962f742-kube-api-access-9h4dg\") pod \"dnsmasq-dns-675f4bcbfc-hwccv\" (UID: \"31fc43cb-0b18-49b4-a19b-6047e962f742\") " pod="openstack/dnsmasq-dns-675f4bcbfc-hwccv" Jan 22 13:59:47 crc kubenswrapper[4769]: I0122 13:59:47.324566 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tr8r7\" (UniqueName: \"kubernetes.io/projected/8ba28aa8-af6e-4b05-b308-1a5d989da923-kube-api-access-tr8r7\") pod \"dnsmasq-dns-78dd6ddcc-8mfxs\" (UID: \"8ba28aa8-af6e-4b05-b308-1a5d989da923\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8mfxs" Jan 22 13:59:47 crc kubenswrapper[4769]: I0122 13:59:47.324590 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31fc43cb-0b18-49b4-a19b-6047e962f742-config\") pod \"dnsmasq-dns-675f4bcbfc-hwccv\" (UID: \"31fc43cb-0b18-49b4-a19b-6047e962f742\") " pod="openstack/dnsmasq-dns-675f4bcbfc-hwccv" Jan 22 13:59:47 crc kubenswrapper[4769]: I0122 13:59:47.324613 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ba28aa8-af6e-4b05-b308-1a5d989da923-config\") pod \"dnsmasq-dns-78dd6ddcc-8mfxs\" (UID: \"8ba28aa8-af6e-4b05-b308-1a5d989da923\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8mfxs" Jan 22 13:59:47 crc kubenswrapper[4769]: I0122 13:59:47.325429 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31fc43cb-0b18-49b4-a19b-6047e962f742-config\") pod \"dnsmasq-dns-675f4bcbfc-hwccv\" (UID: \"31fc43cb-0b18-49b4-a19b-6047e962f742\") " pod="openstack/dnsmasq-dns-675f4bcbfc-hwccv" Jan 22 13:59:47 crc kubenswrapper[4769]: I0122 13:59:47.341341 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9h4dg\" (UniqueName: \"kubernetes.io/projected/31fc43cb-0b18-49b4-a19b-6047e962f742-kube-api-access-9h4dg\") pod \"dnsmasq-dns-675f4bcbfc-hwccv\" (UID: \"31fc43cb-0b18-49b4-a19b-6047e962f742\") " pod="openstack/dnsmasq-dns-675f4bcbfc-hwccv" Jan 22 13:59:47 crc kubenswrapper[4769]: I0122 13:59:47.425872 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tr8r7\" (UniqueName: \"kubernetes.io/projected/8ba28aa8-af6e-4b05-b308-1a5d989da923-kube-api-access-tr8r7\") pod \"dnsmasq-dns-78dd6ddcc-8mfxs\" (UID: \"8ba28aa8-af6e-4b05-b308-1a5d989da923\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8mfxs" Jan 22 13:59:47 crc 
kubenswrapper[4769]: I0122 13:59:47.425937 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ba28aa8-af6e-4b05-b308-1a5d989da923-config\") pod \"dnsmasq-dns-78dd6ddcc-8mfxs\" (UID: \"8ba28aa8-af6e-4b05-b308-1a5d989da923\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8mfxs" Jan 22 13:59:47 crc kubenswrapper[4769]: I0122 13:59:47.425990 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ba28aa8-af6e-4b05-b308-1a5d989da923-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-8mfxs\" (UID: \"8ba28aa8-af6e-4b05-b308-1a5d989da923\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8mfxs" Jan 22 13:59:47 crc kubenswrapper[4769]: I0122 13:59:47.426724 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ba28aa8-af6e-4b05-b308-1a5d989da923-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-8mfxs\" (UID: \"8ba28aa8-af6e-4b05-b308-1a5d989da923\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8mfxs" Jan 22 13:59:47 crc kubenswrapper[4769]: I0122 13:59:47.426878 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ba28aa8-af6e-4b05-b308-1a5d989da923-config\") pod \"dnsmasq-dns-78dd6ddcc-8mfxs\" (UID: \"8ba28aa8-af6e-4b05-b308-1a5d989da923\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8mfxs" Jan 22 13:59:47 crc kubenswrapper[4769]: I0122 13:59:47.446708 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tr8r7\" (UniqueName: \"kubernetes.io/projected/8ba28aa8-af6e-4b05-b308-1a5d989da923-kube-api-access-tr8r7\") pod \"dnsmasq-dns-78dd6ddcc-8mfxs\" (UID: \"8ba28aa8-af6e-4b05-b308-1a5d989da923\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8mfxs" Jan 22 13:59:47 crc kubenswrapper[4769]: I0122 13:59:47.527800 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-hwccv" Jan 22 13:59:47 crc kubenswrapper[4769]: I0122 13:59:47.581839 4769 util.go:30] "No sandbox for pod can be found. 
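Each dnsmasq pod above goes through the same volume lifecycle, once per volume: VerifyControllerAttachedVolume, then MountVolume started, then MountVolume.SetUp succeeded for config, dns-svc, and the projected kube-api-access token. A client-go sketch that lists those volume names for one of the pods; cluster access and the kubeconfig path are assumptions.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a reachable cluster; the kubeconfig path is illustrative.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	pod, err := clientset.CoreV1().Pods("openstack").Get(
		context.TODO(), "dnsmasq-dns-78dd6ddcc-8mfxs", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// These names are the reconciler's "VerifyControllerAttachedVolume"/
	// "MountVolume" targets in the entries above: dns-svc, config, and
	// kube-api-access-tr8r7.
	for _, v := range pod.Spec.Volumes {
		fmt.Println("volume:", v.Name)
	}
}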
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-8mfxs" Jan 22 13:59:48 crc kubenswrapper[4769]: I0122 13:59:48.024686 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-hwccv"] Jan 22 13:59:48 crc kubenswrapper[4769]: W0122 13:59:48.032097 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod31fc43cb_0b18_49b4_a19b_6047e962f742.slice/crio-4b0f19965ee16c593d67cb00a080c26ccc988a655a590dee7acff08c668a12d9 WatchSource:0}: Error finding container 4b0f19965ee16c593d67cb00a080c26ccc988a655a590dee7acff08c668a12d9: Status 404 returned error can't find the container with id 4b0f19965ee16c593d67cb00a080c26ccc988a655a590dee7acff08c668a12d9 Jan 22 13:59:48 crc kubenswrapper[4769]: I0122 13:59:48.032345 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-8mfxs"] Jan 22 13:59:48 crc kubenswrapper[4769]: I0122 13:59:48.035011 4769 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 13:59:48 crc kubenswrapper[4769]: W0122 13:59:48.040520 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ba28aa8_af6e_4b05_b308_1a5d989da923.slice/crio-03a0c26c66ed5a4ae9f84cf892076a64dbf88a82ab566091a04d268bb55d7269 WatchSource:0}: Error finding container 03a0c26c66ed5a4ae9f84cf892076a64dbf88a82ab566091a04d268bb55d7269: Status 404 returned error can't find the container with id 03a0c26c66ed5a4ae9f84cf892076a64dbf88a82ab566091a04d268bb55d7269 Jan 22 13:59:48 crc kubenswrapper[4769]: I0122 13:59:48.210492 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-8mfxs" event={"ID":"8ba28aa8-af6e-4b05-b308-1a5d989da923","Type":"ContainerStarted","Data":"03a0c26c66ed5a4ae9f84cf892076a64dbf88a82ab566091a04d268bb55d7269"} Jan 22 13:59:48 crc kubenswrapper[4769]: I0122 13:59:48.213287 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-hwccv" event={"ID":"31fc43cb-0b18-49b4-a19b-6047e962f742","Type":"ContainerStarted","Data":"4b0f19965ee16c593d67cb00a080c26ccc988a655a590dee7acff08c668a12d9"} Jan 22 13:59:50 crc kubenswrapper[4769]: I0122 13:59:50.160944 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-hwccv"] Jan 22 13:59:50 crc kubenswrapper[4769]: I0122 13:59:50.181876 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-4c5lx"] Jan 22 13:59:50 crc kubenswrapper[4769]: I0122 13:59:50.183108 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-4c5lx" Jan 22 13:59:50 crc kubenswrapper[4769]: I0122 13:59:50.194876 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-4c5lx"] Jan 22 13:59:50 crc kubenswrapper[4769]: I0122 13:59:50.384989 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0e6c47fe-34e3-498e-a488-96efc7e689b0-dns-svc\") pod \"dnsmasq-dns-666b6646f7-4c5lx\" (UID: \"0e6c47fe-34e3-498e-a488-96efc7e689b0\") " pod="openstack/dnsmasq-dns-666b6646f7-4c5lx" Jan 22 13:59:50 crc kubenswrapper[4769]: I0122 13:59:50.385054 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjjzn\" (UniqueName: \"kubernetes.io/projected/0e6c47fe-34e3-498e-a488-96efc7e689b0-kube-api-access-sjjzn\") pod \"dnsmasq-dns-666b6646f7-4c5lx\" (UID: \"0e6c47fe-34e3-498e-a488-96efc7e689b0\") " pod="openstack/dnsmasq-dns-666b6646f7-4c5lx" Jan 22 13:59:50 crc kubenswrapper[4769]: I0122 13:59:50.385114 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e6c47fe-34e3-498e-a488-96efc7e689b0-config\") pod \"dnsmasq-dns-666b6646f7-4c5lx\" (UID: \"0e6c47fe-34e3-498e-a488-96efc7e689b0\") " pod="openstack/dnsmasq-dns-666b6646f7-4c5lx" Jan 22 13:59:50 crc kubenswrapper[4769]: I0122 13:59:50.473120 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-8mfxs"] Jan 22 13:59:50 crc kubenswrapper[4769]: I0122 13:59:50.486564 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0e6c47fe-34e3-498e-a488-96efc7e689b0-dns-svc\") pod \"dnsmasq-dns-666b6646f7-4c5lx\" (UID: \"0e6c47fe-34e3-498e-a488-96efc7e689b0\") " pod="openstack/dnsmasq-dns-666b6646f7-4c5lx" Jan 22 13:59:50 crc kubenswrapper[4769]: I0122 13:59:50.486618 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjjzn\" (UniqueName: \"kubernetes.io/projected/0e6c47fe-34e3-498e-a488-96efc7e689b0-kube-api-access-sjjzn\") pod \"dnsmasq-dns-666b6646f7-4c5lx\" (UID: \"0e6c47fe-34e3-498e-a488-96efc7e689b0\") " pod="openstack/dnsmasq-dns-666b6646f7-4c5lx" Jan 22 13:59:50 crc kubenswrapper[4769]: I0122 13:59:50.486655 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e6c47fe-34e3-498e-a488-96efc7e689b0-config\") pod \"dnsmasq-dns-666b6646f7-4c5lx\" (UID: \"0e6c47fe-34e3-498e-a488-96efc7e689b0\") " pod="openstack/dnsmasq-dns-666b6646f7-4c5lx" Jan 22 13:59:50 crc kubenswrapper[4769]: I0122 13:59:50.487460 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e6c47fe-34e3-498e-a488-96efc7e689b0-config\") pod \"dnsmasq-dns-666b6646f7-4c5lx\" (UID: \"0e6c47fe-34e3-498e-a488-96efc7e689b0\") " pod="openstack/dnsmasq-dns-666b6646f7-4c5lx" Jan 22 13:59:50 crc kubenswrapper[4769]: I0122 13:59:50.487461 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0e6c47fe-34e3-498e-a488-96efc7e689b0-dns-svc\") pod \"dnsmasq-dns-666b6646f7-4c5lx\" (UID: \"0e6c47fe-34e3-498e-a488-96efc7e689b0\") " pod="openstack/dnsmasq-dns-666b6646f7-4c5lx" Jan 22 13:59:50 crc kubenswrapper[4769]: I0122 13:59:50.501139 
4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-qvqgs"] Jan 22 13:59:50 crc kubenswrapper[4769]: I0122 13:59:50.502459 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-qvqgs" Jan 22 13:59:50 crc kubenswrapper[4769]: I0122 13:59:50.517612 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-qvqgs"] Jan 22 13:59:50 crc kubenswrapper[4769]: I0122 13:59:50.520886 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjjzn\" (UniqueName: \"kubernetes.io/projected/0e6c47fe-34e3-498e-a488-96efc7e689b0-kube-api-access-sjjzn\") pod \"dnsmasq-dns-666b6646f7-4c5lx\" (UID: \"0e6c47fe-34e3-498e-a488-96efc7e689b0\") " pod="openstack/dnsmasq-dns-666b6646f7-4c5lx" Jan 22 13:59:50 crc kubenswrapper[4769]: I0122 13:59:50.587472 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b51a7d68-4414-4157-ab31-b5ee67a26b87-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-qvqgs\" (UID: \"b51a7d68-4414-4157-ab31-b5ee67a26b87\") " pod="openstack/dnsmasq-dns-57d769cc4f-qvqgs" Jan 22 13:59:50 crc kubenswrapper[4769]: I0122 13:59:50.587867 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b51a7d68-4414-4157-ab31-b5ee67a26b87-config\") pod \"dnsmasq-dns-57d769cc4f-qvqgs\" (UID: \"b51a7d68-4414-4157-ab31-b5ee67a26b87\") " pod="openstack/dnsmasq-dns-57d769cc4f-qvqgs" Jan 22 13:59:50 crc kubenswrapper[4769]: I0122 13:59:50.587892 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjtk6\" (UniqueName: \"kubernetes.io/projected/b51a7d68-4414-4157-ab31-b5ee67a26b87-kube-api-access-rjtk6\") pod \"dnsmasq-dns-57d769cc4f-qvqgs\" (UID: \"b51a7d68-4414-4157-ab31-b5ee67a26b87\") " pod="openstack/dnsmasq-dns-57d769cc4f-qvqgs" Jan 22 13:59:50 crc kubenswrapper[4769]: I0122 13:59:50.688541 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b51a7d68-4414-4157-ab31-b5ee67a26b87-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-qvqgs\" (UID: \"b51a7d68-4414-4157-ab31-b5ee67a26b87\") " pod="openstack/dnsmasq-dns-57d769cc4f-qvqgs" Jan 22 13:59:50 crc kubenswrapper[4769]: I0122 13:59:50.688621 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b51a7d68-4414-4157-ab31-b5ee67a26b87-config\") pod \"dnsmasq-dns-57d769cc4f-qvqgs\" (UID: \"b51a7d68-4414-4157-ab31-b5ee67a26b87\") " pod="openstack/dnsmasq-dns-57d769cc4f-qvqgs" Jan 22 13:59:50 crc kubenswrapper[4769]: I0122 13:59:50.688656 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjtk6\" (UniqueName: \"kubernetes.io/projected/b51a7d68-4414-4157-ab31-b5ee67a26b87-kube-api-access-rjtk6\") pod \"dnsmasq-dns-57d769cc4f-qvqgs\" (UID: \"b51a7d68-4414-4157-ab31-b5ee67a26b87\") " pod="openstack/dnsmasq-dns-57d769cc4f-qvqgs" Jan 22 13:59:50 crc kubenswrapper[4769]: I0122 13:59:50.689769 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b51a7d68-4414-4157-ab31-b5ee67a26b87-config\") pod \"dnsmasq-dns-57d769cc4f-qvqgs\" (UID: \"b51a7d68-4414-4157-ab31-b5ee67a26b87\") " 
pod="openstack/dnsmasq-dns-57d769cc4f-qvqgs" Jan 22 13:59:50 crc kubenswrapper[4769]: I0122 13:59:50.689849 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b51a7d68-4414-4157-ab31-b5ee67a26b87-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-qvqgs\" (UID: \"b51a7d68-4414-4157-ab31-b5ee67a26b87\") " pod="openstack/dnsmasq-dns-57d769cc4f-qvqgs" Jan 22 13:59:50 crc kubenswrapper[4769]: I0122 13:59:50.708307 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjtk6\" (UniqueName: \"kubernetes.io/projected/b51a7d68-4414-4157-ab31-b5ee67a26b87-kube-api-access-rjtk6\") pod \"dnsmasq-dns-57d769cc4f-qvqgs\" (UID: \"b51a7d68-4414-4157-ab31-b5ee67a26b87\") " pod="openstack/dnsmasq-dns-57d769cc4f-qvqgs" Jan 22 13:59:50 crc kubenswrapper[4769]: I0122 13:59:50.808077 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-4c5lx" Jan 22 13:59:50 crc kubenswrapper[4769]: I0122 13:59:50.850469 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-qvqgs" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.298299 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-qvqgs"] Jan 22 13:59:51 crc kubenswrapper[4769]: W0122 13:59:51.308773 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb51a7d68_4414_4157_ab31_b5ee67a26b87.slice/crio-cec75e0348d51bc91245a011b2511f0acd3a0ca2ec0f078a6f1e2f875edd2e6f WatchSource:0}: Error finding container cec75e0348d51bc91245a011b2511f0acd3a0ca2ec0f078a6f1e2f875edd2e6f: Status 404 returned error can't find the container with id cec75e0348d51bc91245a011b2511f0acd3a0ca2ec0f078a6f1e2f875edd2e6f Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.319976 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.321119 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.323560 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.323948 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.324044 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.324168 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.324259 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.324369 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.324544 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-zm2vm" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.342353 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 22 13:59:51 crc kubenswrapper[4769]: W0122 13:59:51.367876 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0e6c47fe_34e3_498e_a488_96efc7e689b0.slice/crio-8215e5b4fd26aed68a6a57c59e5f8a125091e3ac96652ebf56614a1931aa9fcb WatchSource:0}: Error finding container 8215e5b4fd26aed68a6a57c59e5f8a125091e3ac96652ebf56614a1931aa9fcb: Status 404 returned error can't find the container with id 8215e5b4fd26aed68a6a57c59e5f8a125091e3ac96652ebf56614a1931aa9fcb Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.373619 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-4c5lx"] Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.501845 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/12de511c-514e-496c-9fbf-6d1e10db81fc-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") " pod="openstack/rabbitmq-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.501900 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csgrc\" (UniqueName: \"kubernetes.io/projected/12de511c-514e-496c-9fbf-6d1e10db81fc-kube-api-access-csgrc\") pod \"rabbitmq-server-0\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") " pod="openstack/rabbitmq-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.501931 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/12de511c-514e-496c-9fbf-6d1e10db81fc-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") " pod="openstack/rabbitmq-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.501955 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/12de511c-514e-496c-9fbf-6d1e10db81fc-server-conf\") pod 
\"rabbitmq-server-0\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") " pod="openstack/rabbitmq-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.501981 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/12de511c-514e-496c-9fbf-6d1e10db81fc-config-data\") pod \"rabbitmq-server-0\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") " pod="openstack/rabbitmq-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.502027 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/12de511c-514e-496c-9fbf-6d1e10db81fc-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") " pod="openstack/rabbitmq-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.502065 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/12de511c-514e-496c-9fbf-6d1e10db81fc-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") " pod="openstack/rabbitmq-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.502086 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") " pod="openstack/rabbitmq-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.502115 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/12de511c-514e-496c-9fbf-6d1e10db81fc-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") " pod="openstack/rabbitmq-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.502138 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/12de511c-514e-496c-9fbf-6d1e10db81fc-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") " pod="openstack/rabbitmq-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.502185 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/12de511c-514e-496c-9fbf-6d1e10db81fc-pod-info\") pod \"rabbitmq-server-0\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") " pod="openstack/rabbitmq-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.604052 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/12de511c-514e-496c-9fbf-6d1e10db81fc-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") " pod="openstack/rabbitmq-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.604136 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") " pod="openstack/rabbitmq-server-0" Jan 22 13:59:51 crc 
kubenswrapper[4769]: I0122 13:59:51.604159 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/12de511c-514e-496c-9fbf-6d1e10db81fc-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") " pod="openstack/rabbitmq-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.604185 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/12de511c-514e-496c-9fbf-6d1e10db81fc-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") " pod="openstack/rabbitmq-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.604211 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/12de511c-514e-496c-9fbf-6d1e10db81fc-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") " pod="openstack/rabbitmq-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.604260 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/12de511c-514e-496c-9fbf-6d1e10db81fc-pod-info\") pod \"rabbitmq-server-0\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") " pod="openstack/rabbitmq-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.604289 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/12de511c-514e-496c-9fbf-6d1e10db81fc-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") " pod="openstack/rabbitmq-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.604309 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csgrc\" (UniqueName: \"kubernetes.io/projected/12de511c-514e-496c-9fbf-6d1e10db81fc-kube-api-access-csgrc\") pod \"rabbitmq-server-0\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") " pod="openstack/rabbitmq-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.604335 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/12de511c-514e-496c-9fbf-6d1e10db81fc-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") " pod="openstack/rabbitmq-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.604359 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/12de511c-514e-496c-9fbf-6d1e10db81fc-server-conf\") pod \"rabbitmq-server-0\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") " pod="openstack/rabbitmq-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.604383 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/12de511c-514e-496c-9fbf-6d1e10db81fc-config-data\") pod \"rabbitmq-server-0\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") " pod="openstack/rabbitmq-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.604703 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/12de511c-514e-496c-9fbf-6d1e10db81fc-rabbitmq-erlang-cookie\") pod 
\"rabbitmq-server-0\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") " pod="openstack/rabbitmq-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.605108 4769 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/rabbitmq-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.605863 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/12de511c-514e-496c-9fbf-6d1e10db81fc-config-data\") pod \"rabbitmq-server-0\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") " pod="openstack/rabbitmq-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.605910 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/12de511c-514e-496c-9fbf-6d1e10db81fc-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") " pod="openstack/rabbitmq-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.608581 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/12de511c-514e-496c-9fbf-6d1e10db81fc-server-conf\") pod \"rabbitmq-server-0\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") " pod="openstack/rabbitmq-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.608812 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/12de511c-514e-496c-9fbf-6d1e10db81fc-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") " pod="openstack/rabbitmq-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.611141 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/12de511c-514e-496c-9fbf-6d1e10db81fc-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") " pod="openstack/rabbitmq-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.611531 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/12de511c-514e-496c-9fbf-6d1e10db81fc-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") " pod="openstack/rabbitmq-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.612895 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/12de511c-514e-496c-9fbf-6d1e10db81fc-pod-info\") pod \"rabbitmq-server-0\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") " pod="openstack/rabbitmq-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.614891 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/12de511c-514e-496c-9fbf-6d1e10db81fc-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") " pod="openstack/rabbitmq-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.628776 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csgrc\" (UniqueName: 
\"kubernetes.io/projected/12de511c-514e-496c-9fbf-6d1e10db81fc-kube-api-access-csgrc\") pod \"rabbitmq-server-0\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") " pod="openstack/rabbitmq-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.638297 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.639907 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.643940 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-5c97b" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.644519 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.645070 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.645937 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.649878 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.649950 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.650025 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.657570 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.658467 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") " pod="openstack/rabbitmq-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.680750 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.806744 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7b5386c6-ecca-4882-b692-80c4f5a194e7-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.806838 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7b5386c6-ecca-4882-b692-80c4f5a194e7-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.807057 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7b5386c6-ecca-4882-b692-80c4f5a194e7-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.807096 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7b5386c6-ecca-4882-b692-80c4f5a194e7-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.807123 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7b5386c6-ecca-4882-b692-80c4f5a194e7-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.807153 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7b5386c6-ecca-4882-b692-80c4f5a194e7-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.807176 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7b5386c6-ecca-4882-b692-80c4f5a194e7-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.807205 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.807236 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7b5386c6-ecca-4882-b692-80c4f5a194e7-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.807295 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqp6s\" (UniqueName: \"kubernetes.io/projected/7b5386c6-ecca-4882-b692-80c4f5a194e7-kube-api-access-kqp6s\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.807317 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7b5386c6-ecca-4882-b692-80c4f5a194e7-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.908929 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7b5386c6-ecca-4882-b692-80c4f5a194e7-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.909308 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7b5386c6-ecca-4882-b692-80c4f5a194e7-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.909343 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.909373 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7b5386c6-ecca-4882-b692-80c4f5a194e7-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.909427 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqp6s\" (UniqueName: \"kubernetes.io/projected/7b5386c6-ecca-4882-b692-80c4f5a194e7-kube-api-access-kqp6s\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.909446 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7b5386c6-ecca-4882-b692-80c4f5a194e7-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.909556 4769 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") device mount path \"/mnt/openstack/pv05\"" 
pod="openstack/rabbitmq-cell1-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.910010 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7b5386c6-ecca-4882-b692-80c4f5a194e7-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.910303 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7b5386c6-ecca-4882-b692-80c4f5a194e7-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.910390 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7b5386c6-ecca-4882-b692-80c4f5a194e7-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.910045 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7b5386c6-ecca-4882-b692-80c4f5a194e7-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.910806 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7b5386c6-ecca-4882-b692-80c4f5a194e7-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.910853 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7b5386c6-ecca-4882-b692-80c4f5a194e7-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.910882 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7b5386c6-ecca-4882-b692-80c4f5a194e7-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.911290 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7b5386c6-ecca-4882-b692-80c4f5a194e7-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.912746 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7b5386c6-ecca-4882-b692-80c4f5a194e7-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.917849 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"server-conf\" (UniqueName: \"kubernetes.io/configmap/7b5386c6-ecca-4882-b692-80c4f5a194e7-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.918158 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7b5386c6-ecca-4882-b692-80c4f5a194e7-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.918215 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7b5386c6-ecca-4882-b692-80c4f5a194e7-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.923384 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7b5386c6-ecca-4882-b692-80c4f5a194e7-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.924479 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7b5386c6-ecca-4882-b692-80c4f5a194e7-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.927895 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqp6s\" (UniqueName: \"kubernetes.io/projected/7b5386c6-ecca-4882-b692-80c4f5a194e7-kube-api-access-kqp6s\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.945519 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 13:59:52 crc kubenswrapper[4769]: I0122 13:59:52.064342 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 22 13:59:52 crc kubenswrapper[4769]: I0122 13:59:52.209584 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 22 13:59:52 crc kubenswrapper[4769]: I0122 13:59:52.247889 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-4c5lx" event={"ID":"0e6c47fe-34e3-498e-a488-96efc7e689b0","Type":"ContainerStarted","Data":"8215e5b4fd26aed68a6a57c59e5f8a125091e3ac96652ebf56614a1931aa9fcb"} Jan 22 13:59:52 crc kubenswrapper[4769]: I0122 13:59:52.249294 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-qvqgs" event={"ID":"b51a7d68-4414-4157-ab31-b5ee67a26b87","Type":"ContainerStarted","Data":"cec75e0348d51bc91245a011b2511f0acd3a0ca2ec0f078a6f1e2f875edd2e6f"} Jan 22 13:59:52 crc kubenswrapper[4769]: I0122 13:59:52.805104 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 22 13:59:52 crc kubenswrapper[4769]: I0122 13:59:52.807656 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 22 13:59:52 crc kubenswrapper[4769]: I0122 13:59:52.810590 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-txspp" Jan 22 13:59:52 crc kubenswrapper[4769]: I0122 13:59:52.810841 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 22 13:59:52 crc kubenswrapper[4769]: I0122 13:59:52.811006 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 22 13:59:52 crc kubenswrapper[4769]: I0122 13:59:52.813055 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 22 13:59:52 crc kubenswrapper[4769]: I0122 13:59:52.816997 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 22 13:59:52 crc kubenswrapper[4769]: I0122 13:59:52.823122 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 22 13:59:52 crc kubenswrapper[4769]: I0122 13:59:52.935100 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5478968-e798-44de-b3ed-632864fc0607-operator-scripts\") pod \"openstack-galera-0\" (UID: \"d5478968-e798-44de-b3ed-632864fc0607\") " pod="openstack/openstack-galera-0" Jan 22 13:59:52 crc kubenswrapper[4769]: I0122 13:59:52.935161 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5478968-e798-44de-b3ed-632864fc0607-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"d5478968-e798-44de-b3ed-632864fc0607\") " pod="openstack/openstack-galera-0" Jan 22 13:59:52 crc kubenswrapper[4769]: I0122 13:59:52.935207 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtxbg\" (UniqueName: \"kubernetes.io/projected/d5478968-e798-44de-b3ed-632864fc0607-kube-api-access-dtxbg\") pod \"openstack-galera-0\" (UID: \"d5478968-e798-44de-b3ed-632864fc0607\") " pod="openstack/openstack-galera-0" Jan 22 13:59:52 crc kubenswrapper[4769]: I0122 13:59:52.935276 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" 
(UniqueName: \"kubernetes.io/configmap/d5478968-e798-44de-b3ed-632864fc0607-config-data-default\") pod \"openstack-galera-0\" (UID: \"d5478968-e798-44de-b3ed-632864fc0607\") " pod="openstack/openstack-galera-0" Jan 22 13:59:52 crc kubenswrapper[4769]: I0122 13:59:52.935342 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d5478968-e798-44de-b3ed-632864fc0607-kolla-config\") pod \"openstack-galera-0\" (UID: \"d5478968-e798-44de-b3ed-632864fc0607\") " pod="openstack/openstack-galera-0" Jan 22 13:59:52 crc kubenswrapper[4769]: I0122 13:59:52.935362 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5478968-e798-44de-b3ed-632864fc0607-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"d5478968-e798-44de-b3ed-632864fc0607\") " pod="openstack/openstack-galera-0" Jan 22 13:59:52 crc kubenswrapper[4769]: I0122 13:59:52.935388 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d5478968-e798-44de-b3ed-632864fc0607-config-data-generated\") pod \"openstack-galera-0\" (UID: \"d5478968-e798-44de-b3ed-632864fc0607\") " pod="openstack/openstack-galera-0" Jan 22 13:59:52 crc kubenswrapper[4769]: I0122 13:59:52.935428 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"d5478968-e798-44de-b3ed-632864fc0607\") " pod="openstack/openstack-galera-0" Jan 22 13:59:53 crc kubenswrapper[4769]: I0122 13:59:53.037282 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5478968-e798-44de-b3ed-632864fc0607-operator-scripts\") pod \"openstack-galera-0\" (UID: \"d5478968-e798-44de-b3ed-632864fc0607\") " pod="openstack/openstack-galera-0" Jan 22 13:59:53 crc kubenswrapper[4769]: I0122 13:59:53.037343 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5478968-e798-44de-b3ed-632864fc0607-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"d5478968-e798-44de-b3ed-632864fc0607\") " pod="openstack/openstack-galera-0" Jan 22 13:59:53 crc kubenswrapper[4769]: I0122 13:59:53.037377 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtxbg\" (UniqueName: \"kubernetes.io/projected/d5478968-e798-44de-b3ed-632864fc0607-kube-api-access-dtxbg\") pod \"openstack-galera-0\" (UID: \"d5478968-e798-44de-b3ed-632864fc0607\") " pod="openstack/openstack-galera-0" Jan 22 13:59:53 crc kubenswrapper[4769]: I0122 13:59:53.037436 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d5478968-e798-44de-b3ed-632864fc0607-config-data-default\") pod \"openstack-galera-0\" (UID: \"d5478968-e798-44de-b3ed-632864fc0607\") " pod="openstack/openstack-galera-0" Jan 22 13:59:53 crc kubenswrapper[4769]: I0122 13:59:53.037502 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d5478968-e798-44de-b3ed-632864fc0607-kolla-config\") pod \"openstack-galera-0\" (UID: 
\"d5478968-e798-44de-b3ed-632864fc0607\") " pod="openstack/openstack-galera-0" Jan 22 13:59:53 crc kubenswrapper[4769]: I0122 13:59:53.037525 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5478968-e798-44de-b3ed-632864fc0607-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"d5478968-e798-44de-b3ed-632864fc0607\") " pod="openstack/openstack-galera-0" Jan 22 13:59:53 crc kubenswrapper[4769]: I0122 13:59:53.037551 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d5478968-e798-44de-b3ed-632864fc0607-config-data-generated\") pod \"openstack-galera-0\" (UID: \"d5478968-e798-44de-b3ed-632864fc0607\") " pod="openstack/openstack-galera-0" Jan 22 13:59:53 crc kubenswrapper[4769]: I0122 13:59:53.037588 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"d5478968-e798-44de-b3ed-632864fc0607\") " pod="openstack/openstack-galera-0" Jan 22 13:59:53 crc kubenswrapper[4769]: I0122 13:59:53.037972 4769 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"d5478968-e798-44de-b3ed-632864fc0607\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/openstack-galera-0" Jan 22 13:59:53 crc kubenswrapper[4769]: I0122 13:59:53.041220 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d5478968-e798-44de-b3ed-632864fc0607-config-data-default\") pod \"openstack-galera-0\" (UID: \"d5478968-e798-44de-b3ed-632864fc0607\") " pod="openstack/openstack-galera-0" Jan 22 13:59:53 crc kubenswrapper[4769]: I0122 13:59:53.042806 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5478968-e798-44de-b3ed-632864fc0607-operator-scripts\") pod \"openstack-galera-0\" (UID: \"d5478968-e798-44de-b3ed-632864fc0607\") " pod="openstack/openstack-galera-0" Jan 22 13:59:53 crc kubenswrapper[4769]: I0122 13:59:53.043234 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d5478968-e798-44de-b3ed-632864fc0607-kolla-config\") pod \"openstack-galera-0\" (UID: \"d5478968-e798-44de-b3ed-632864fc0607\") " pod="openstack/openstack-galera-0" Jan 22 13:59:53 crc kubenswrapper[4769]: I0122 13:59:53.043616 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d5478968-e798-44de-b3ed-632864fc0607-config-data-generated\") pod \"openstack-galera-0\" (UID: \"d5478968-e798-44de-b3ed-632864fc0607\") " pod="openstack/openstack-galera-0" Jan 22 13:59:53 crc kubenswrapper[4769]: I0122 13:59:53.050025 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5478968-e798-44de-b3ed-632864fc0607-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"d5478968-e798-44de-b3ed-632864fc0607\") " pod="openstack/openstack-galera-0" Jan 22 13:59:53 crc kubenswrapper[4769]: I0122 13:59:53.051560 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/d5478968-e798-44de-b3ed-632864fc0607-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"d5478968-e798-44de-b3ed-632864fc0607\") " pod="openstack/openstack-galera-0" Jan 22 13:59:53 crc kubenswrapper[4769]: I0122 13:59:53.060111 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtxbg\" (UniqueName: \"kubernetes.io/projected/d5478968-e798-44de-b3ed-632864fc0607-kube-api-access-dtxbg\") pod \"openstack-galera-0\" (UID: \"d5478968-e798-44de-b3ed-632864fc0607\") " pod="openstack/openstack-galera-0" Jan 22 13:59:53 crc kubenswrapper[4769]: I0122 13:59:53.062054 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"d5478968-e798-44de-b3ed-632864fc0607\") " pod="openstack/openstack-galera-0" Jan 22 13:59:53 crc kubenswrapper[4769]: I0122 13:59:53.169854 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.196374 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.198282 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.202285 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.203446 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-nztd5" Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.203629 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.203806 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.209737 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.354425 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/048fbe43-0fef-46e8-bc9d-038c96a4696c-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"048fbe43-0fef-46e8-bc9d-038c96a4696c\") " pod="openstack/openstack-cell1-galera-0" Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.354478 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/048fbe43-0fef-46e8-bc9d-038c96a4696c-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"048fbe43-0fef-46e8-bc9d-038c96a4696c\") " pod="openstack/openstack-cell1-galera-0" Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.354527 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/048fbe43-0fef-46e8-bc9d-038c96a4696c-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"048fbe43-0fef-46e8-bc9d-038c96a4696c\") " pod="openstack/openstack-cell1-galera-0" Jan 22 13:59:54 crc 
kubenswrapper[4769]: I0122 13:59:54.354652 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/048fbe43-0fef-46e8-bc9d-038c96a4696c-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"048fbe43-0fef-46e8-bc9d-038c96a4696c\") " pod="openstack/openstack-cell1-galera-0" Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.354683 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/048fbe43-0fef-46e8-bc9d-038c96a4696c-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"048fbe43-0fef-46e8-bc9d-038c96a4696c\") " pod="openstack/openstack-cell1-galera-0" Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.354721 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/048fbe43-0fef-46e8-bc9d-038c96a4696c-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"048fbe43-0fef-46e8-bc9d-038c96a4696c\") " pod="openstack/openstack-cell1-galera-0" Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.354828 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25rxt\" (UniqueName: \"kubernetes.io/projected/048fbe43-0fef-46e8-bc9d-038c96a4696c-kube-api-access-25rxt\") pod \"openstack-cell1-galera-0\" (UID: \"048fbe43-0fef-46e8-bc9d-038c96a4696c\") " pod="openstack/openstack-cell1-galera-0" Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.354942 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"048fbe43-0fef-46e8-bc9d-038c96a4696c\") " pod="openstack/openstack-cell1-galera-0" Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.456780 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25rxt\" (UniqueName: \"kubernetes.io/projected/048fbe43-0fef-46e8-bc9d-038c96a4696c-kube-api-access-25rxt\") pod \"openstack-cell1-galera-0\" (UID: \"048fbe43-0fef-46e8-bc9d-038c96a4696c\") " pod="openstack/openstack-cell1-galera-0" Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.456884 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"048fbe43-0fef-46e8-bc9d-038c96a4696c\") " pod="openstack/openstack-cell1-galera-0" Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.456915 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/048fbe43-0fef-46e8-bc9d-038c96a4696c-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"048fbe43-0fef-46e8-bc9d-038c96a4696c\") " pod="openstack/openstack-cell1-galera-0" Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.456935 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/048fbe43-0fef-46e8-bc9d-038c96a4696c-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"048fbe43-0fef-46e8-bc9d-038c96a4696c\") " pod="openstack/openstack-cell1-galera-0" Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 
13:59:54.456971 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/048fbe43-0fef-46e8-bc9d-038c96a4696c-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"048fbe43-0fef-46e8-bc9d-038c96a4696c\") " pod="openstack/openstack-cell1-galera-0" Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.457040 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/048fbe43-0fef-46e8-bc9d-038c96a4696c-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"048fbe43-0fef-46e8-bc9d-038c96a4696c\") " pod="openstack/openstack-cell1-galera-0" Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.457070 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/048fbe43-0fef-46e8-bc9d-038c96a4696c-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"048fbe43-0fef-46e8-bc9d-038c96a4696c\") " pod="openstack/openstack-cell1-galera-0" Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.457126 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/048fbe43-0fef-46e8-bc9d-038c96a4696c-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"048fbe43-0fef-46e8-bc9d-038c96a4696c\") " pod="openstack/openstack-cell1-galera-0" Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.457259 4769 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"048fbe43-0fef-46e8-bc9d-038c96a4696c\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/openstack-cell1-galera-0" Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.457516 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/048fbe43-0fef-46e8-bc9d-038c96a4696c-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"048fbe43-0fef-46e8-bc9d-038c96a4696c\") " pod="openstack/openstack-cell1-galera-0" Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.458052 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/048fbe43-0fef-46e8-bc9d-038c96a4696c-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"048fbe43-0fef-46e8-bc9d-038c96a4696c\") " pod="openstack/openstack-cell1-galera-0" Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.458631 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/048fbe43-0fef-46e8-bc9d-038c96a4696c-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"048fbe43-0fef-46e8-bc9d-038c96a4696c\") " pod="openstack/openstack-cell1-galera-0" Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.459134 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/048fbe43-0fef-46e8-bc9d-038c96a4696c-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"048fbe43-0fef-46e8-bc9d-038c96a4696c\") " pod="openstack/openstack-cell1-galera-0" Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.462929 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/048fbe43-0fef-46e8-bc9d-038c96a4696c-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"048fbe43-0fef-46e8-bc9d-038c96a4696c\") " pod="openstack/openstack-cell1-galera-0" Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.481781 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.482764 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.486369 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/048fbe43-0fef-46e8-bc9d-038c96a4696c-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"048fbe43-0fef-46e8-bc9d-038c96a4696c\") " pod="openstack/openstack-cell1-galera-0" Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.491757 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.492021 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-hjfvp" Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.492154 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.492809 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25rxt\" (UniqueName: \"kubernetes.io/projected/048fbe43-0fef-46e8-bc9d-038c96a4696c-kube-api-access-25rxt\") pod \"openstack-cell1-galera-0\" (UID: \"048fbe43-0fef-46e8-bc9d-038c96a4696c\") " pod="openstack/openstack-cell1-galera-0" Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.509266 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"048fbe43-0fef-46e8-bc9d-038c96a4696c\") " pod="openstack/openstack-cell1-galera-0" Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.510501 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.529901 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.557862 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3aa5525a-0eb2-487f-8721-3ef58f5df4aa-kolla-config\") pod \"memcached-0\" (UID: \"3aa5525a-0eb2-487f-8721-3ef58f5df4aa\") " pod="openstack/memcached-0" Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.557905 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzwfp\" (UniqueName: \"kubernetes.io/projected/3aa5525a-0eb2-487f-8721-3ef58f5df4aa-kube-api-access-hzwfp\") pod \"memcached-0\" (UID: \"3aa5525a-0eb2-487f-8721-3ef58f5df4aa\") " pod="openstack/memcached-0" Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.557956 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aa5525a-0eb2-487f-8721-3ef58f5df4aa-combined-ca-bundle\") pod \"memcached-0\" (UID: \"3aa5525a-0eb2-487f-8721-3ef58f5df4aa\") " pod="openstack/memcached-0" Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.557993 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/3aa5525a-0eb2-487f-8721-3ef58f5df4aa-memcached-tls-certs\") pod \"memcached-0\" (UID: \"3aa5525a-0eb2-487f-8721-3ef58f5df4aa\") " pod="openstack/memcached-0" Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.558024 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3aa5525a-0eb2-487f-8721-3ef58f5df4aa-config-data\") pod \"memcached-0\" (UID: \"3aa5525a-0eb2-487f-8721-3ef58f5df4aa\") " pod="openstack/memcached-0" Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.659203 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3aa5525a-0eb2-487f-8721-3ef58f5df4aa-kolla-config\") pod \"memcached-0\" (UID: \"3aa5525a-0eb2-487f-8721-3ef58f5df4aa\") " pod="openstack/memcached-0" Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.659258 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzwfp\" (UniqueName: \"kubernetes.io/projected/3aa5525a-0eb2-487f-8721-3ef58f5df4aa-kube-api-access-hzwfp\") pod \"memcached-0\" (UID: \"3aa5525a-0eb2-487f-8721-3ef58f5df4aa\") " pod="openstack/memcached-0" Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.659311 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aa5525a-0eb2-487f-8721-3ef58f5df4aa-combined-ca-bundle\") pod \"memcached-0\" (UID: \"3aa5525a-0eb2-487f-8721-3ef58f5df4aa\") " pod="openstack/memcached-0" Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.659345 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/3aa5525a-0eb2-487f-8721-3ef58f5df4aa-memcached-tls-certs\") pod \"memcached-0\" (UID: \"3aa5525a-0eb2-487f-8721-3ef58f5df4aa\") " pod="openstack/memcached-0" Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.659374 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/configmap/3aa5525a-0eb2-487f-8721-3ef58f5df4aa-config-data\") pod \"memcached-0\" (UID: \"3aa5525a-0eb2-487f-8721-3ef58f5df4aa\") " pod="openstack/memcached-0" Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.660244 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3aa5525a-0eb2-487f-8721-3ef58f5df4aa-config-data\") pod \"memcached-0\" (UID: \"3aa5525a-0eb2-487f-8721-3ef58f5df4aa\") " pod="openstack/memcached-0" Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.660273 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3aa5525a-0eb2-487f-8721-3ef58f5df4aa-kolla-config\") pod \"memcached-0\" (UID: \"3aa5525a-0eb2-487f-8721-3ef58f5df4aa\") " pod="openstack/memcached-0" Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.668559 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aa5525a-0eb2-487f-8721-3ef58f5df4aa-combined-ca-bundle\") pod \"memcached-0\" (UID: \"3aa5525a-0eb2-487f-8721-3ef58f5df4aa\") " pod="openstack/memcached-0" Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.673414 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/3aa5525a-0eb2-487f-8721-3ef58f5df4aa-memcached-tls-certs\") pod \"memcached-0\" (UID: \"3aa5525a-0eb2-487f-8721-3ef58f5df4aa\") " pod="openstack/memcached-0" Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.691207 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzwfp\" (UniqueName: \"kubernetes.io/projected/3aa5525a-0eb2-487f-8721-3ef58f5df4aa-kube-api-access-hzwfp\") pod \"memcached-0\" (UID: \"3aa5525a-0eb2-487f-8721-3ef58f5df4aa\") " pod="openstack/memcached-0" Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.861968 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 22 13:59:56 crc kubenswrapper[4769]: I0122 13:59:56.345521 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"12de511c-514e-496c-9fbf-6d1e10db81fc","Type":"ContainerStarted","Data":"6d72a769611a46bdb1768f4e9380f28bb2a07dc2061ec5bd95716855943febe1"} Jan 22 13:59:56 crc kubenswrapper[4769]: I0122 13:59:56.735271 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 22 13:59:56 crc kubenswrapper[4769]: I0122 13:59:56.736167 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 22 13:59:56 crc kubenswrapper[4769]: I0122 13:59:56.743194 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-x6wmz" Jan 22 13:59:56 crc kubenswrapper[4769]: I0122 13:59:56.749454 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 22 13:59:56 crc kubenswrapper[4769]: I0122 13:59:56.896855 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fdpt\" (UniqueName: \"kubernetes.io/projected/6e7522e6-de75-492d-b445-a463f875e393-kube-api-access-9fdpt\") pod \"kube-state-metrics-0\" (UID: \"6e7522e6-de75-492d-b445-a463f875e393\") " pod="openstack/kube-state-metrics-0" Jan 22 13:59:56 crc kubenswrapper[4769]: I0122 13:59:56.997918 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fdpt\" (UniqueName: \"kubernetes.io/projected/6e7522e6-de75-492d-b445-a463f875e393-kube-api-access-9fdpt\") pod \"kube-state-metrics-0\" (UID: \"6e7522e6-de75-492d-b445-a463f875e393\") " pod="openstack/kube-state-metrics-0" Jan 22 13:59:57 crc kubenswrapper[4769]: I0122 13:59:57.017132 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fdpt\" (UniqueName: \"kubernetes.io/projected/6e7522e6-de75-492d-b445-a463f875e393-kube-api-access-9fdpt\") pod \"kube-state-metrics-0\" (UID: \"6e7522e6-de75-492d-b445-a463f875e393\") " pod="openstack/kube-state-metrics-0" Jan 22 13:59:57 crc kubenswrapper[4769]: I0122 13:59:57.062307 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 22 13:59:59 crc kubenswrapper[4769]: I0122 13:59:59.867178 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-57w6l"] Jan 22 13:59:59 crc kubenswrapper[4769]: I0122 13:59:59.869380 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-57w6l" Jan 22 13:59:59 crc kubenswrapper[4769]: I0122 13:59:59.873386 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-9hrbg" Jan 22 13:59:59 crc kubenswrapper[4769]: I0122 13:59:59.873467 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 22 13:59:59 crc kubenswrapper[4769]: I0122 13:59:59.875527 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ljbrk"] Jan 22 13:59:59 crc kubenswrapper[4769]: I0122 13:59:59.876465 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ljbrk" Jan 22 13:59:59 crc kubenswrapper[4769]: I0122 13:59:59.879292 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 22 13:59:59 crc kubenswrapper[4769]: I0122 13:59:59.892050 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-57w6l"] Jan 22 13:59:59 crc kubenswrapper[4769]: I0122 13:59:59.900920 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ljbrk"] Jan 22 13:59:59 crc kubenswrapper[4769]: I0122 13:59:59.944613 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnf2x\" (UniqueName: \"kubernetes.io/projected/db7ce269-d7ec-4db1-aab3-b22da5d56c6e-kube-api-access-xnf2x\") pod \"ovn-controller-ljbrk\" (UID: \"db7ce269-d7ec-4db1-aab3-b22da5d56c6e\") " pod="openstack/ovn-controller-ljbrk" Jan 22 13:59:59 crc kubenswrapper[4769]: I0122 13:59:59.944672 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2f6b8be2-7370-47ca-843b-1dea67d837c3-var-run\") pod \"ovn-controller-ovs-57w6l\" (UID: \"2f6b8be2-7370-47ca-843b-1dea67d837c3\") " pod="openstack/ovn-controller-ovs-57w6l" Jan 22 13:59:59 crc kubenswrapper[4769]: I0122 13:59:59.944705 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/2f6b8be2-7370-47ca-843b-1dea67d837c3-var-lib\") pod \"ovn-controller-ovs-57w6l\" (UID: \"2f6b8be2-7370-47ca-843b-1dea67d837c3\") " pod="openstack/ovn-controller-ovs-57w6l" Jan 22 13:59:59 crc kubenswrapper[4769]: I0122 13:59:59.944738 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/db7ce269-d7ec-4db1-aab3-b22da5d56c6e-var-log-ovn\") pod \"ovn-controller-ljbrk\" (UID: \"db7ce269-d7ec-4db1-aab3-b22da5d56c6e\") " pod="openstack/ovn-controller-ljbrk" Jan 22 13:59:59 crc kubenswrapper[4769]: I0122 13:59:59.944860 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/db7ce269-d7ec-4db1-aab3-b22da5d56c6e-var-run\") pod \"ovn-controller-ljbrk\" (UID: \"db7ce269-d7ec-4db1-aab3-b22da5d56c6e\") " pod="openstack/ovn-controller-ljbrk" Jan 22 13:59:59 crc kubenswrapper[4769]: I0122 13:59:59.944890 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8spp\" (UniqueName: \"kubernetes.io/projected/2f6b8be2-7370-47ca-843b-1dea67d837c3-kube-api-access-q8spp\") pod \"ovn-controller-ovs-57w6l\" (UID: \"2f6b8be2-7370-47ca-843b-1dea67d837c3\") " pod="openstack/ovn-controller-ovs-57w6l" Jan 22 13:59:59 crc kubenswrapper[4769]: I0122 13:59:59.945283 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db7ce269-d7ec-4db1-aab3-b22da5d56c6e-combined-ca-bundle\") pod \"ovn-controller-ljbrk\" (UID: \"db7ce269-d7ec-4db1-aab3-b22da5d56c6e\") " pod="openstack/ovn-controller-ljbrk" Jan 22 13:59:59 crc kubenswrapper[4769]: I0122 13:59:59.945393 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/db7ce269-d7ec-4db1-aab3-b22da5d56c6e-scripts\") pod \"ovn-controller-ljbrk\" (UID: \"db7ce269-d7ec-4db1-aab3-b22da5d56c6e\") " pod="openstack/ovn-controller-ljbrk" Jan 22 13:59:59 crc kubenswrapper[4769]: I0122 13:59:59.945434 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/db7ce269-d7ec-4db1-aab3-b22da5d56c6e-var-run-ovn\") pod \"ovn-controller-ljbrk\" (UID: \"db7ce269-d7ec-4db1-aab3-b22da5d56c6e\") " pod="openstack/ovn-controller-ljbrk" Jan 22 13:59:59 crc kubenswrapper[4769]: I0122 13:59:59.945475 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/2f6b8be2-7370-47ca-843b-1dea67d837c3-etc-ovs\") pod \"ovn-controller-ovs-57w6l\" (UID: \"2f6b8be2-7370-47ca-843b-1dea67d837c3\") " pod="openstack/ovn-controller-ovs-57w6l" Jan 22 13:59:59 crc kubenswrapper[4769]: I0122 13:59:59.945496 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/db7ce269-d7ec-4db1-aab3-b22da5d56c6e-ovn-controller-tls-certs\") pod \"ovn-controller-ljbrk\" (UID: \"db7ce269-d7ec-4db1-aab3-b22da5d56c6e\") " pod="openstack/ovn-controller-ljbrk" Jan 22 13:59:59 crc kubenswrapper[4769]: I0122 13:59:59.945513 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2f6b8be2-7370-47ca-843b-1dea67d837c3-scripts\") pod \"ovn-controller-ovs-57w6l\" (UID: \"2f6b8be2-7370-47ca-843b-1dea67d837c3\") " pod="openstack/ovn-controller-ovs-57w6l" Jan 22 13:59:59 crc kubenswrapper[4769]: I0122 13:59:59.945557 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2f6b8be2-7370-47ca-843b-1dea67d837c3-var-log\") pod \"ovn-controller-ovs-57w6l\" (UID: \"2f6b8be2-7370-47ca-843b-1dea67d837c3\") " pod="openstack/ovn-controller-ovs-57w6l" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.046456 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/2f6b8be2-7370-47ca-843b-1dea67d837c3-etc-ovs\") pod \"ovn-controller-ovs-57w6l\" (UID: \"2f6b8be2-7370-47ca-843b-1dea67d837c3\") " pod="openstack/ovn-controller-ovs-57w6l" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.046507 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/db7ce269-d7ec-4db1-aab3-b22da5d56c6e-ovn-controller-tls-certs\") pod \"ovn-controller-ljbrk\" (UID: \"db7ce269-d7ec-4db1-aab3-b22da5d56c6e\") " pod="openstack/ovn-controller-ljbrk" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.046533 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2f6b8be2-7370-47ca-843b-1dea67d837c3-scripts\") pod \"ovn-controller-ovs-57w6l\" (UID: \"2f6b8be2-7370-47ca-843b-1dea67d837c3\") " pod="openstack/ovn-controller-ovs-57w6l" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.046568 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2f6b8be2-7370-47ca-843b-1dea67d837c3-var-log\") pod \"ovn-controller-ovs-57w6l\" (UID: 
\"2f6b8be2-7370-47ca-843b-1dea67d837c3\") " pod="openstack/ovn-controller-ovs-57w6l" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.046595 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnf2x\" (UniqueName: \"kubernetes.io/projected/db7ce269-d7ec-4db1-aab3-b22da5d56c6e-kube-api-access-xnf2x\") pod \"ovn-controller-ljbrk\" (UID: \"db7ce269-d7ec-4db1-aab3-b22da5d56c6e\") " pod="openstack/ovn-controller-ljbrk" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.046614 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2f6b8be2-7370-47ca-843b-1dea67d837c3-var-run\") pod \"ovn-controller-ovs-57w6l\" (UID: \"2f6b8be2-7370-47ca-843b-1dea67d837c3\") " pod="openstack/ovn-controller-ovs-57w6l" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.046629 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/2f6b8be2-7370-47ca-843b-1dea67d837c3-var-lib\") pod \"ovn-controller-ovs-57w6l\" (UID: \"2f6b8be2-7370-47ca-843b-1dea67d837c3\") " pod="openstack/ovn-controller-ovs-57w6l" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.046647 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/db7ce269-d7ec-4db1-aab3-b22da5d56c6e-var-log-ovn\") pod \"ovn-controller-ljbrk\" (UID: \"db7ce269-d7ec-4db1-aab3-b22da5d56c6e\") " pod="openstack/ovn-controller-ljbrk" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.046704 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/db7ce269-d7ec-4db1-aab3-b22da5d56c6e-var-run\") pod \"ovn-controller-ljbrk\" (UID: \"db7ce269-d7ec-4db1-aab3-b22da5d56c6e\") " pod="openstack/ovn-controller-ljbrk" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.046722 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8spp\" (UniqueName: \"kubernetes.io/projected/2f6b8be2-7370-47ca-843b-1dea67d837c3-kube-api-access-q8spp\") pod \"ovn-controller-ovs-57w6l\" (UID: \"2f6b8be2-7370-47ca-843b-1dea67d837c3\") " pod="openstack/ovn-controller-ovs-57w6l" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.046742 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db7ce269-d7ec-4db1-aab3-b22da5d56c6e-combined-ca-bundle\") pod \"ovn-controller-ljbrk\" (UID: \"db7ce269-d7ec-4db1-aab3-b22da5d56c6e\") " pod="openstack/ovn-controller-ljbrk" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.046836 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/db7ce269-d7ec-4db1-aab3-b22da5d56c6e-scripts\") pod \"ovn-controller-ljbrk\" (UID: \"db7ce269-d7ec-4db1-aab3-b22da5d56c6e\") " pod="openstack/ovn-controller-ljbrk" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.046874 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/db7ce269-d7ec-4db1-aab3-b22da5d56c6e-var-run-ovn\") pod \"ovn-controller-ljbrk\" (UID: \"db7ce269-d7ec-4db1-aab3-b22da5d56c6e\") " pod="openstack/ovn-controller-ljbrk" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.046982 4769 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/2f6b8be2-7370-47ca-843b-1dea67d837c3-etc-ovs\") pod \"ovn-controller-ovs-57w6l\" (UID: \"2f6b8be2-7370-47ca-843b-1dea67d837c3\") " pod="openstack/ovn-controller-ovs-57w6l" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.047221 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/db7ce269-d7ec-4db1-aab3-b22da5d56c6e-var-run-ovn\") pod \"ovn-controller-ljbrk\" (UID: \"db7ce269-d7ec-4db1-aab3-b22da5d56c6e\") " pod="openstack/ovn-controller-ljbrk" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.047246 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/db7ce269-d7ec-4db1-aab3-b22da5d56c6e-var-log-ovn\") pod \"ovn-controller-ljbrk\" (UID: \"db7ce269-d7ec-4db1-aab3-b22da5d56c6e\") " pod="openstack/ovn-controller-ljbrk" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.047342 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2f6b8be2-7370-47ca-843b-1dea67d837c3-var-run\") pod \"ovn-controller-ovs-57w6l\" (UID: \"2f6b8be2-7370-47ca-843b-1dea67d837c3\") " pod="openstack/ovn-controller-ovs-57w6l" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.047344 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/db7ce269-d7ec-4db1-aab3-b22da5d56c6e-var-run\") pod \"ovn-controller-ljbrk\" (UID: \"db7ce269-d7ec-4db1-aab3-b22da5d56c6e\") " pod="openstack/ovn-controller-ljbrk" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.047444 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/2f6b8be2-7370-47ca-843b-1dea67d837c3-var-lib\") pod \"ovn-controller-ovs-57w6l\" (UID: \"2f6b8be2-7370-47ca-843b-1dea67d837c3\") " pod="openstack/ovn-controller-ovs-57w6l" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.047518 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2f6b8be2-7370-47ca-843b-1dea67d837c3-var-log\") pod \"ovn-controller-ovs-57w6l\" (UID: \"2f6b8be2-7370-47ca-843b-1dea67d837c3\") " pod="openstack/ovn-controller-ovs-57w6l" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.049010 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2f6b8be2-7370-47ca-843b-1dea67d837c3-scripts\") pod \"ovn-controller-ovs-57w6l\" (UID: \"2f6b8be2-7370-47ca-843b-1dea67d837c3\") " pod="openstack/ovn-controller-ovs-57w6l" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.051783 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/db7ce269-d7ec-4db1-aab3-b22da5d56c6e-scripts\") pod \"ovn-controller-ljbrk\" (UID: \"db7ce269-d7ec-4db1-aab3-b22da5d56c6e\") " pod="openstack/ovn-controller-ljbrk" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.054195 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/db7ce269-d7ec-4db1-aab3-b22da5d56c6e-ovn-controller-tls-certs\") pod \"ovn-controller-ljbrk\" (UID: \"db7ce269-d7ec-4db1-aab3-b22da5d56c6e\") " pod="openstack/ovn-controller-ljbrk" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.058336 4769 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db7ce269-d7ec-4db1-aab3-b22da5d56c6e-combined-ca-bundle\") pod \"ovn-controller-ljbrk\" (UID: \"db7ce269-d7ec-4db1-aab3-b22da5d56c6e\") " pod="openstack/ovn-controller-ljbrk" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.072519 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8spp\" (UniqueName: \"kubernetes.io/projected/2f6b8be2-7370-47ca-843b-1dea67d837c3-kube-api-access-q8spp\") pod \"ovn-controller-ovs-57w6l\" (UID: \"2f6b8be2-7370-47ca-843b-1dea67d837c3\") " pod="openstack/ovn-controller-ovs-57w6l" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.078685 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnf2x\" (UniqueName: \"kubernetes.io/projected/db7ce269-d7ec-4db1-aab3-b22da5d56c6e-kube-api-access-xnf2x\") pod \"ovn-controller-ljbrk\" (UID: \"db7ce269-d7ec-4db1-aab3-b22da5d56c6e\") " pod="openstack/ovn-controller-ljbrk" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.156450 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484840-2ln64"] Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.157601 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484840-2ln64" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.160373 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.161258 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.165513 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484840-2ln64"] Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.206306 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-57w6l" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.216501 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ljbrk" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.252359 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57ts4\" (UniqueName: \"kubernetes.io/projected/b8a0650e-6e96-491e-88df-d228be8155e1-kube-api-access-57ts4\") pod \"collect-profiles-29484840-2ln64\" (UID: \"b8a0650e-6e96-491e-88df-d228be8155e1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484840-2ln64" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.252479 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b8a0650e-6e96-491e-88df-d228be8155e1-config-volume\") pod \"collect-profiles-29484840-2ln64\" (UID: \"b8a0650e-6e96-491e-88df-d228be8155e1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484840-2ln64" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.252520 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b8a0650e-6e96-491e-88df-d228be8155e1-secret-volume\") pod \"collect-profiles-29484840-2ln64\" (UID: \"b8a0650e-6e96-491e-88df-d228be8155e1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484840-2ln64" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.352319 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.353739 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57ts4\" (UniqueName: \"kubernetes.io/projected/b8a0650e-6e96-491e-88df-d228be8155e1-kube-api-access-57ts4\") pod \"collect-profiles-29484840-2ln64\" (UID: \"b8a0650e-6e96-491e-88df-d228be8155e1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484840-2ln64" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.353766 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.353807 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b8a0650e-6e96-491e-88df-d228be8155e1-config-volume\") pod \"collect-profiles-29484840-2ln64\" (UID: \"b8a0650e-6e96-491e-88df-d228be8155e1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484840-2ln64" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.353850 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b8a0650e-6e96-491e-88df-d228be8155e1-secret-volume\") pod \"collect-profiles-29484840-2ln64\" (UID: \"b8a0650e-6e96-491e-88df-d228be8155e1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484840-2ln64" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.354839 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b8a0650e-6e96-491e-88df-d228be8155e1-config-volume\") pod \"collect-profiles-29484840-2ln64\" (UID: \"b8a0650e-6e96-491e-88df-d228be8155e1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484840-2ln64" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.357554 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.357819 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.358119 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-66v6p" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.359405 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.359459 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.360109 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b8a0650e-6e96-491e-88df-d228be8155e1-secret-volume\") pod \"collect-profiles-29484840-2ln64\" (UID: \"b8a0650e-6e96-491e-88df-d228be8155e1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484840-2ln64" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.370131 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.371635 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57ts4\" (UniqueName: \"kubernetes.io/projected/b8a0650e-6e96-491e-88df-d228be8155e1-kube-api-access-57ts4\") pod \"collect-profiles-29484840-2ln64\" (UID: \"b8a0650e-6e96-491e-88df-d228be8155e1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484840-2ln64" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.455195 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/760402cd-68ff-4d2e-a1ba-c54132e75c13-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"760402cd-68ff-4d2e-a1ba-c54132e75c13\") " pod="openstack/ovsdbserver-nb-0" Jan 22 
14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.455289 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/760402cd-68ff-4d2e-a1ba-c54132e75c13-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"760402cd-68ff-4d2e-a1ba-c54132e75c13\") " pod="openstack/ovsdbserver-nb-0" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.455347 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/760402cd-68ff-4d2e-a1ba-c54132e75c13-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"760402cd-68ff-4d2e-a1ba-c54132e75c13\") " pod="openstack/ovsdbserver-nb-0" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.455371 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/760402cd-68ff-4d2e-a1ba-c54132e75c13-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"760402cd-68ff-4d2e-a1ba-c54132e75c13\") " pod="openstack/ovsdbserver-nb-0" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.455397 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsdl5\" (UniqueName: \"kubernetes.io/projected/760402cd-68ff-4d2e-a1ba-c54132e75c13-kube-api-access-zsdl5\") pod \"ovsdbserver-nb-0\" (UID: \"760402cd-68ff-4d2e-a1ba-c54132e75c13\") " pod="openstack/ovsdbserver-nb-0" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.455524 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"760402cd-68ff-4d2e-a1ba-c54132e75c13\") " pod="openstack/ovsdbserver-nb-0" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.455660 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/760402cd-68ff-4d2e-a1ba-c54132e75c13-config\") pod \"ovsdbserver-nb-0\" (UID: \"760402cd-68ff-4d2e-a1ba-c54132e75c13\") " pod="openstack/ovsdbserver-nb-0" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.455745 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/760402cd-68ff-4d2e-a1ba-c54132e75c13-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"760402cd-68ff-4d2e-a1ba-c54132e75c13\") " pod="openstack/ovsdbserver-nb-0" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.489718 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484840-2ln64" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.557087 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/760402cd-68ff-4d2e-a1ba-c54132e75c13-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"760402cd-68ff-4d2e-a1ba-c54132e75c13\") " pod="openstack/ovsdbserver-nb-0" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.557471 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/760402cd-68ff-4d2e-a1ba-c54132e75c13-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"760402cd-68ff-4d2e-a1ba-c54132e75c13\") " pod="openstack/ovsdbserver-nb-0" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.557540 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/760402cd-68ff-4d2e-a1ba-c54132e75c13-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"760402cd-68ff-4d2e-a1ba-c54132e75c13\") " pod="openstack/ovsdbserver-nb-0" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.557609 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/760402cd-68ff-4d2e-a1ba-c54132e75c13-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"760402cd-68ff-4d2e-a1ba-c54132e75c13\") " pod="openstack/ovsdbserver-nb-0" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.557633 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/760402cd-68ff-4d2e-a1ba-c54132e75c13-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"760402cd-68ff-4d2e-a1ba-c54132e75c13\") " pod="openstack/ovsdbserver-nb-0" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.558263 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsdl5\" (UniqueName: \"kubernetes.io/projected/760402cd-68ff-4d2e-a1ba-c54132e75c13-kube-api-access-zsdl5\") pod \"ovsdbserver-nb-0\" (UID: \"760402cd-68ff-4d2e-a1ba-c54132e75c13\") " pod="openstack/ovsdbserver-nb-0" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.558321 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"760402cd-68ff-4d2e-a1ba-c54132e75c13\") " pod="openstack/ovsdbserver-nb-0" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.558392 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/760402cd-68ff-4d2e-a1ba-c54132e75c13-config\") pod \"ovsdbserver-nb-0\" (UID: \"760402cd-68ff-4d2e-a1ba-c54132e75c13\") " pod="openstack/ovsdbserver-nb-0" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.558754 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/760402cd-68ff-4d2e-a1ba-c54132e75c13-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"760402cd-68ff-4d2e-a1ba-c54132e75c13\") " pod="openstack/ovsdbserver-nb-0" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.558759 4769 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"760402cd-68ff-4d2e-a1ba-c54132e75c13\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/ovsdbserver-nb-0" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.557833 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/760402cd-68ff-4d2e-a1ba-c54132e75c13-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"760402cd-68ff-4d2e-a1ba-c54132e75c13\") " pod="openstack/ovsdbserver-nb-0" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.559250 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/760402cd-68ff-4d2e-a1ba-c54132e75c13-config\") pod \"ovsdbserver-nb-0\" (UID: \"760402cd-68ff-4d2e-a1ba-c54132e75c13\") " pod="openstack/ovsdbserver-nb-0" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.562371 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/760402cd-68ff-4d2e-a1ba-c54132e75c13-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"760402cd-68ff-4d2e-a1ba-c54132e75c13\") " pod="openstack/ovsdbserver-nb-0" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.564060 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/760402cd-68ff-4d2e-a1ba-c54132e75c13-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"760402cd-68ff-4d2e-a1ba-c54132e75c13\") " pod="openstack/ovsdbserver-nb-0" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.579991 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/760402cd-68ff-4d2e-a1ba-c54132e75c13-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"760402cd-68ff-4d2e-a1ba-c54132e75c13\") " pod="openstack/ovsdbserver-nb-0" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.580056 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsdl5\" (UniqueName: \"kubernetes.io/projected/760402cd-68ff-4d2e-a1ba-c54132e75c13-kube-api-access-zsdl5\") pod \"ovsdbserver-nb-0\" (UID: \"760402cd-68ff-4d2e-a1ba-c54132e75c13\") " pod="openstack/ovsdbserver-nb-0" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.582459 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"760402cd-68ff-4d2e-a1ba-c54132e75c13\") " pod="openstack/ovsdbserver-nb-0" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.714582 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 22 14:00:03 crc kubenswrapper[4769]: I0122 14:00:03.963615 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 22 14:00:03 crc kubenswrapper[4769]: I0122 14:00:03.967242 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 22 14:00:03 crc kubenswrapper[4769]: I0122 14:00:03.971118 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 22 14:00:03 crc kubenswrapper[4769]: I0122 14:00:03.971144 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 22 14:00:03 crc kubenswrapper[4769]: I0122 14:00:03.971303 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-tgkpr" Jan 22 14:00:03 crc kubenswrapper[4769]: I0122 14:00:03.976591 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 22 14:00:03 crc kubenswrapper[4769]: I0122 14:00:03.978044 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 22 14:00:04 crc kubenswrapper[4769]: I0122 14:00:04.109509 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a4e51d1-8dea-4f12-b7e9-7888f5672711-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1a4e51d1-8dea-4f12-b7e9-7888f5672711\") " pod="openstack/ovsdbserver-sb-0" Jan 22 14:00:04 crc kubenswrapper[4769]: I0122 14:00:04.109569 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a4e51d1-8dea-4f12-b7e9-7888f5672711-config\") pod \"ovsdbserver-sb-0\" (UID: \"1a4e51d1-8dea-4f12-b7e9-7888f5672711\") " pod="openstack/ovsdbserver-sb-0" Jan 22 14:00:04 crc kubenswrapper[4769]: I0122 14:00:04.109608 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a4e51d1-8dea-4f12-b7e9-7888f5672711-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1a4e51d1-8dea-4f12-b7e9-7888f5672711\") " pod="openstack/ovsdbserver-sb-0" Jan 22 14:00:04 crc kubenswrapper[4769]: I0122 14:00:04.109629 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkl7c\" (UniqueName: \"kubernetes.io/projected/1a4e51d1-8dea-4f12-b7e9-7888f5672711-kube-api-access-kkl7c\") pod \"ovsdbserver-sb-0\" (UID: \"1a4e51d1-8dea-4f12-b7e9-7888f5672711\") " pod="openstack/ovsdbserver-sb-0" Jan 22 14:00:04 crc kubenswrapper[4769]: I0122 14:00:04.109651 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1a4e51d1-8dea-4f12-b7e9-7888f5672711-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"1a4e51d1-8dea-4f12-b7e9-7888f5672711\") " pod="openstack/ovsdbserver-sb-0" Jan 22 14:00:04 crc kubenswrapper[4769]: I0122 14:00:04.109842 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1a4e51d1-8dea-4f12-b7e9-7888f5672711-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"1a4e51d1-8dea-4f12-b7e9-7888f5672711\") " pod="openstack/ovsdbserver-sb-0" Jan 22 14:00:04 crc kubenswrapper[4769]: I0122 14:00:04.109886 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a4e51d1-8dea-4f12-b7e9-7888f5672711-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: 
\"1a4e51d1-8dea-4f12-b7e9-7888f5672711\") " pod="openstack/ovsdbserver-sb-0" Jan 22 14:00:04 crc kubenswrapper[4769]: I0122 14:00:04.109927 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"1a4e51d1-8dea-4f12-b7e9-7888f5672711\") " pod="openstack/ovsdbserver-sb-0" Jan 22 14:00:04 crc kubenswrapper[4769]: I0122 14:00:04.211892 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a4e51d1-8dea-4f12-b7e9-7888f5672711-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1a4e51d1-8dea-4f12-b7e9-7888f5672711\") " pod="openstack/ovsdbserver-sb-0" Jan 22 14:00:04 crc kubenswrapper[4769]: I0122 14:00:04.212186 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a4e51d1-8dea-4f12-b7e9-7888f5672711-config\") pod \"ovsdbserver-sb-0\" (UID: \"1a4e51d1-8dea-4f12-b7e9-7888f5672711\") " pod="openstack/ovsdbserver-sb-0" Jan 22 14:00:04 crc kubenswrapper[4769]: I0122 14:00:04.212315 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a4e51d1-8dea-4f12-b7e9-7888f5672711-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1a4e51d1-8dea-4f12-b7e9-7888f5672711\") " pod="openstack/ovsdbserver-sb-0" Jan 22 14:00:04 crc kubenswrapper[4769]: I0122 14:00:04.212416 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkl7c\" (UniqueName: \"kubernetes.io/projected/1a4e51d1-8dea-4f12-b7e9-7888f5672711-kube-api-access-kkl7c\") pod \"ovsdbserver-sb-0\" (UID: \"1a4e51d1-8dea-4f12-b7e9-7888f5672711\") " pod="openstack/ovsdbserver-sb-0" Jan 22 14:00:04 crc kubenswrapper[4769]: I0122 14:00:04.212527 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1a4e51d1-8dea-4f12-b7e9-7888f5672711-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"1a4e51d1-8dea-4f12-b7e9-7888f5672711\") " pod="openstack/ovsdbserver-sb-0" Jan 22 14:00:04 crc kubenswrapper[4769]: I0122 14:00:04.212664 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1a4e51d1-8dea-4f12-b7e9-7888f5672711-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"1a4e51d1-8dea-4f12-b7e9-7888f5672711\") " pod="openstack/ovsdbserver-sb-0" Jan 22 14:00:04 crc kubenswrapper[4769]: I0122 14:00:04.212770 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a4e51d1-8dea-4f12-b7e9-7888f5672711-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"1a4e51d1-8dea-4f12-b7e9-7888f5672711\") " pod="openstack/ovsdbserver-sb-0" Jan 22 14:00:04 crc kubenswrapper[4769]: I0122 14:00:04.212944 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"1a4e51d1-8dea-4f12-b7e9-7888f5672711\") " pod="openstack/ovsdbserver-sb-0" Jan 22 14:00:04 crc kubenswrapper[4769]: I0122 14:00:04.213460 4769 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"1a4e51d1-8dea-4f12-b7e9-7888f5672711\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/ovsdbserver-sb-0" Jan 22 14:00:04 crc kubenswrapper[4769]: I0122 14:00:04.225713 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1a4e51d1-8dea-4f12-b7e9-7888f5672711-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"1a4e51d1-8dea-4f12-b7e9-7888f5672711\") " pod="openstack/ovsdbserver-sb-0" Jan 22 14:00:04 crc kubenswrapper[4769]: I0122 14:00:04.230183 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a4e51d1-8dea-4f12-b7e9-7888f5672711-config\") pod \"ovsdbserver-sb-0\" (UID: \"1a4e51d1-8dea-4f12-b7e9-7888f5672711\") " pod="openstack/ovsdbserver-sb-0" Jan 22 14:00:04 crc kubenswrapper[4769]: I0122 14:00:04.235908 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a4e51d1-8dea-4f12-b7e9-7888f5672711-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"1a4e51d1-8dea-4f12-b7e9-7888f5672711\") " pod="openstack/ovsdbserver-sb-0" Jan 22 14:00:04 crc kubenswrapper[4769]: I0122 14:00:04.235908 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a4e51d1-8dea-4f12-b7e9-7888f5672711-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1a4e51d1-8dea-4f12-b7e9-7888f5672711\") " pod="openstack/ovsdbserver-sb-0" Jan 22 14:00:04 crc kubenswrapper[4769]: I0122 14:00:04.236158 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1a4e51d1-8dea-4f12-b7e9-7888f5672711-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"1a4e51d1-8dea-4f12-b7e9-7888f5672711\") " pod="openstack/ovsdbserver-sb-0" Jan 22 14:00:04 crc kubenswrapper[4769]: I0122 14:00:04.236236 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a4e51d1-8dea-4f12-b7e9-7888f5672711-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1a4e51d1-8dea-4f12-b7e9-7888f5672711\") " pod="openstack/ovsdbserver-sb-0" Jan 22 14:00:04 crc kubenswrapper[4769]: I0122 14:00:04.239954 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"1a4e51d1-8dea-4f12-b7e9-7888f5672711\") " pod="openstack/ovsdbserver-sb-0" Jan 22 14:00:04 crc kubenswrapper[4769]: I0122 14:00:04.247809 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkl7c\" (UniqueName: \"kubernetes.io/projected/1a4e51d1-8dea-4f12-b7e9-7888f5672711-kube-api-access-kkl7c\") pod \"ovsdbserver-sb-0\" (UID: \"1a4e51d1-8dea-4f12-b7e9-7888f5672711\") " pod="openstack/ovsdbserver-sb-0" Jan 22 14:00:04 crc kubenswrapper[4769]: I0122 14:00:04.295757 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 22 14:00:04 crc kubenswrapper[4769]: E0122 14:00:04.804248 4769 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 22 14:00:04 crc kubenswrapper[4769]: E0122 14:00:04.804409 4769 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9h4dg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-hwccv_openstack(31fc43cb-0b18-49b4-a19b-6047e962f742): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 14:00:04 crc kubenswrapper[4769]: E0122 14:00:04.806140 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-hwccv" podUID="31fc43cb-0b18-49b4-a19b-6047e962f742" Jan 22 14:00:04 crc kubenswrapper[4769]: E0122 14:00:04.843656 4769 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 22 14:00:04 crc kubenswrapper[4769]: E0122 14:00:04.843867 4769 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d 
--hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tr8r7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-8mfxs_openstack(8ba28aa8-af6e-4b05-b308-1a5d989da923): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 22 14:00:04 crc kubenswrapper[4769]: E0122 14:00:04.845123 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-8mfxs" podUID="8ba28aa8-af6e-4b05-b308-1a5d989da923"
Jan 22 14:00:05 crc kubenswrapper[4769]: I0122 14:00:05.306311 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Jan 22 14:00:05 crc kubenswrapper[4769]: W0122 14:00:05.332318 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod048fbe43_0fef_46e8_bc9d_038c96a4696c.slice/crio-799a7dd33ec4a813965958d3822b3ed52b98cd160ad30b3ce66e5d64579eaa3b WatchSource:0}: Error finding container 799a7dd33ec4a813965958d3822b3ed52b98cd160ad30b3ce66e5d64579eaa3b: Status 404 returned error can't find the container with id 799a7dd33ec4a813965958d3822b3ed52b98cd160ad30b3ce66e5d64579eaa3b
Jan 22 14:00:05 crc kubenswrapper[4769]: I0122 14:00:05.332362 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 22 14:00:05 crc kubenswrapper[4769]: I0122 14:00:05.406852 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
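
The ErrImagePull block above is one failure reported at three layers: log.go:32 logs the CRI pull itself failing ("copying config: context canceled", i.e. the pull was torn down before the copy finished, plausibly because this node is pulling many images at once), kuberuntime_manager.go:1274 then dumps the entire init-container spec it was trying to start, and pod_workers.go:1301 records the failed sync so the pod is retried with image-pull backoff. The dump is already Go struct syntax; reflowed with k8s.io/api/core/v1 types, the first one (pod dnsmasq-dns-675f4bcbfc-hwccv) reads roughly as below. Field values are copied from the log entry; the pointer helpers are mine:

package main

import (
	corev1 "k8s.io/api/core/v1"
)

func boolPtr(b bool) *bool    { return &b }
func int64Ptr(i int64) *int64 { return &i }

// initContainer restates the spec serialized in the first UnhandledError
// entry; only fields the log actually shows are set.
var initContainer = corev1.Container{
	Name:    "init",
	Image:   "quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified",
	Command: []string{"/bin/bash"},
	Args: []string{"-c",
		"dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts " +
			"--keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) " +
			"--port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv " +
			"--bogus-priv --log-queries --test"},
	Env: []corev1.EnvVar{
		{Name: "CONFIG_HASH", Value: "nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q"},
		{Name: "POD_IP", ValueFrom: &corev1.EnvVarSource{
			FieldRef: &corev1.ObjectFieldSelector{APIVersion: "v1", FieldPath: "status.podIP"},
		}},
	},
	VolumeMounts: []corev1.VolumeMount{
		{Name: "config", ReadOnly: true, MountPath: "/etc/dnsmasq.d/config.cfg", SubPath: "dns"},
		{Name: "kube-api-access-9h4dg", ReadOnly: true,
			MountPath: "/var/run/secrets/kubernetes.io/serviceaccount"},
	},
	TerminationMessagePath:   "/dev/termination-log",
	TerminationMessagePolicy: corev1.TerminationMessageReadFile,
	ImagePullPolicy:          corev1.PullIfNotPresent,
	SecurityContext: &corev1.SecurityContext{
		Capabilities:             &corev1.Capabilities{Drop: []corev1.Capability{"ALL"}},
		RunAsUser:                int64Ptr(1000650000),
		RunAsNonRoot:             boolPtr(true),
		AllowPrivilegeEscalation: boolPtr(false),
		SeccompProfile:           &corev1.SeccompProfile{Type: corev1.SeccompProfileTypeRuntimeDefault},
	},
}

func main() { _ = initContainer }

The second dump differs only in CONFIG_HASH and in carrying an extra dns-svc volume mount; dnsmasq is run with --test here, so the init container only validates the generated configuration and exits.
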
UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 22 14:00:05 crc kubenswrapper[4769]: I0122 14:00:05.418050 4769 generic.go:334] "Generic (PLEG): container finished" podID="0e6c47fe-34e3-498e-a488-96efc7e689b0" containerID="84798df2ae1ab219f2618c1a2106e22205e2ad5f85b084c29279df47b1ca4989" exitCode=0 Jan 22 14:00:05 crc kubenswrapper[4769]: I0122 14:00:05.418097 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-4c5lx" event={"ID":"0e6c47fe-34e3-498e-a488-96efc7e689b0","Type":"ContainerDied","Data":"84798df2ae1ab219f2618c1a2106e22205e2ad5f85b084c29279df47b1ca4989"} Jan 22 14:00:05 crc kubenswrapper[4769]: I0122 14:00:05.421608 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7b5386c6-ecca-4882-b692-80c4f5a194e7","Type":"ContainerStarted","Data":"ccc004cd79462493e89b2cd51c3ab3ddf01650baa9a183653d7b3f8461132890"} Jan 22 14:00:05 crc kubenswrapper[4769]: I0122 14:00:05.422912 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"048fbe43-0fef-46e8-bc9d-038c96a4696c","Type":"ContainerStarted","Data":"799a7dd33ec4a813965958d3822b3ed52b98cd160ad30b3ce66e5d64579eaa3b"} Jan 22 14:00:05 crc kubenswrapper[4769]: I0122 14:00:05.446945 4769 generic.go:334] "Generic (PLEG): container finished" podID="b51a7d68-4414-4157-ab31-b5ee67a26b87" containerID="ee9898fa7e974bc9f074358f6748677719c62c630a7913b53ab6b56932e4d895" exitCode=0 Jan 22 14:00:05 crc kubenswrapper[4769]: I0122 14:00:05.447695 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-qvqgs" event={"ID":"b51a7d68-4414-4157-ab31-b5ee67a26b87","Type":"ContainerDied","Data":"ee9898fa7e974bc9f074358f6748677719c62c630a7913b53ab6b56932e4d895"} Jan 22 14:00:05 crc kubenswrapper[4769]: W0122 14:00:05.463257 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3aa5525a_0eb2_487f_8721_3ef58f5df4aa.slice/crio-29899af631e597d454d94c2e237fd5da716c05b941de37c5f4c8b59774f7befb WatchSource:0}: Error finding container 29899af631e597d454d94c2e237fd5da716c05b941de37c5f4c8b59774f7befb: Status 404 returned error can't find the container with id 29899af631e597d454d94c2e237fd5da716c05b941de37c5f4c8b59774f7befb Jan 22 14:00:05 crc kubenswrapper[4769]: W0122 14:00:05.463566 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5478968_e798_44de_b3ed_632864fc0607.slice/crio-77ad7917c701542f52eaa6c296d3d5c705a1b360fd970f58a554a0c63596423a WatchSource:0}: Error finding container 77ad7917c701542f52eaa6c296d3d5c705a1b360fd970f58a554a0c63596423a: Status 404 returned error can't find the container with id 77ad7917c701542f52eaa6c296d3d5c705a1b360fd970f58a554a0c63596423a Jan 22 14:00:05 crc kubenswrapper[4769]: I0122 14:00:05.630372 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 22 14:00:05 crc kubenswrapper[4769]: I0122 14:00:05.726873 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484840-2ln64"] Jan 22 14:00:05 crc kubenswrapper[4769]: I0122 14:00:05.741533 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 22 14:00:05 crc kubenswrapper[4769]: I0122 14:00:05.849812 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/ovn-controller-ljbrk"] Jan 22 14:00:05 crc kubenswrapper[4769]: I0122 14:00:05.894336 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 22 14:00:05 crc kubenswrapper[4769]: I0122 14:00:05.992171 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-57w6l"] Jan 22 14:00:06 crc kubenswrapper[4769]: I0122 14:00:06.455116 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"d5478968-e798-44de-b3ed-632864fc0607","Type":"ContainerStarted","Data":"77ad7917c701542f52eaa6c296d3d5c705a1b360fd970f58a554a0c63596423a"} Jan 22 14:00:06 crc kubenswrapper[4769]: I0122 14:00:06.456644 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"3aa5525a-0eb2-487f-8721-3ef58f5df4aa","Type":"ContainerStarted","Data":"29899af631e597d454d94c2e237fd5da716c05b941de37c5f4c8b59774f7befb"} Jan 22 14:00:06 crc kubenswrapper[4769]: W0122 14:00:06.863659 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f6b8be2_7370_47ca_843b_1dea67d837c3.slice/crio-69ee5901eeefc3366f3ede871af6311168586781f9d819ea65be75a58690b69d WatchSource:0}: Error finding container 69ee5901eeefc3366f3ede871af6311168586781f9d819ea65be75a58690b69d: Status 404 returned error can't find the container with id 69ee5901eeefc3366f3ede871af6311168586781f9d819ea65be75a58690b69d Jan 22 14:00:06 crc kubenswrapper[4769]: W0122 14:00:06.886133 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb7ce269_d7ec_4db1_aab3_b22da5d56c6e.slice/crio-e84fd0a260edc9ab68448161b8b2806b1fa84f91e3e93022f3f6f4d06802a2ba WatchSource:0}: Error finding container e84fd0a260edc9ab68448161b8b2806b1fa84f91e3e93022f3f6f4d06802a2ba: Status 404 returned error can't find the container with id e84fd0a260edc9ab68448161b8b2806b1fa84f91e3e93022f3f6f4d06802a2ba Jan 22 14:00:07 crc kubenswrapper[4769]: I0122 14:00:07.076322 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-8mfxs" Jan 22 14:00:07 crc kubenswrapper[4769]: I0122 14:00:07.158926 4769 util.go:48] "No ready sandbox for pod can be found. 
Jan 22 14:00:07 crc kubenswrapper[4769]: I0122 14:00:07.158926 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-hwccv"
Jan 22 14:00:07 crc kubenswrapper[4769]: I0122 14:00:07.209255 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ba28aa8-af6e-4b05-b308-1a5d989da923-config\") pod \"8ba28aa8-af6e-4b05-b308-1a5d989da923\" (UID: \"8ba28aa8-af6e-4b05-b308-1a5d989da923\") "
Jan 22 14:00:07 crc kubenswrapper[4769]: I0122 14:00:07.209343 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tr8r7\" (UniqueName: \"kubernetes.io/projected/8ba28aa8-af6e-4b05-b308-1a5d989da923-kube-api-access-tr8r7\") pod \"8ba28aa8-af6e-4b05-b308-1a5d989da923\" (UID: \"8ba28aa8-af6e-4b05-b308-1a5d989da923\") "
Jan 22 14:00:07 crc kubenswrapper[4769]: I0122 14:00:07.209412 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ba28aa8-af6e-4b05-b308-1a5d989da923-dns-svc\") pod \"8ba28aa8-af6e-4b05-b308-1a5d989da923\" (UID: \"8ba28aa8-af6e-4b05-b308-1a5d989da923\") "
Jan 22 14:00:07 crc kubenswrapper[4769]: I0122 14:00:07.209839 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ba28aa8-af6e-4b05-b308-1a5d989da923-config" (OuterVolumeSpecName: "config") pod "8ba28aa8-af6e-4b05-b308-1a5d989da923" (UID: "8ba28aa8-af6e-4b05-b308-1a5d989da923"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 14:00:07 crc kubenswrapper[4769]: I0122 14:00:07.210131 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ba28aa8-af6e-4b05-b308-1a5d989da923-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8ba28aa8-af6e-4b05-b308-1a5d989da923" (UID: "8ba28aa8-af6e-4b05-b308-1a5d989da923"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 14:00:07 crc kubenswrapper[4769]: I0122 14:00:07.210573 4769 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ba28aa8-af6e-4b05-b308-1a5d989da923-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 22 14:00:07 crc kubenswrapper[4769]: I0122 14:00:07.210588 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ba28aa8-af6e-4b05-b308-1a5d989da923-config\") on node \"crc\" DevicePath \"\""
Jan 22 14:00:07 crc kubenswrapper[4769]: I0122 14:00:07.309223 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ba28aa8-af6e-4b05-b308-1a5d989da923-kube-api-access-tr8r7" (OuterVolumeSpecName: "kube-api-access-tr8r7") pod "8ba28aa8-af6e-4b05-b308-1a5d989da923" (UID: "8ba28aa8-af6e-4b05-b308-1a5d989da923"). InnerVolumeSpecName "kube-api-access-tr8r7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 14:00:07 crc kubenswrapper[4769]: I0122 14:00:07.311999 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9h4dg\" (UniqueName: \"kubernetes.io/projected/31fc43cb-0b18-49b4-a19b-6047e962f742-kube-api-access-9h4dg\") pod \"31fc43cb-0b18-49b4-a19b-6047e962f742\" (UID: \"31fc43cb-0b18-49b4-a19b-6047e962f742\") "
Jan 22 14:00:07 crc kubenswrapper[4769]: I0122 14:00:07.312062 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31fc43cb-0b18-49b4-a19b-6047e962f742-config\") pod \"31fc43cb-0b18-49b4-a19b-6047e962f742\" (UID: \"31fc43cb-0b18-49b4-a19b-6047e962f742\") "
Jan 22 14:00:07 crc kubenswrapper[4769]: I0122 14:00:07.312606 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tr8r7\" (UniqueName: \"kubernetes.io/projected/8ba28aa8-af6e-4b05-b308-1a5d989da923-kube-api-access-tr8r7\") on node \"crc\" DevicePath \"\""
Jan 22 14:00:07 crc kubenswrapper[4769]: I0122 14:00:07.312636 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31fc43cb-0b18-49b4-a19b-6047e962f742-config" (OuterVolumeSpecName: "config") pod "31fc43cb-0b18-49b4-a19b-6047e962f742" (UID: "31fc43cb-0b18-49b4-a19b-6047e962f742"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 14:00:07 crc kubenswrapper[4769]: I0122 14:00:07.409117 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31fc43cb-0b18-49b4-a19b-6047e962f742-kube-api-access-9h4dg" (OuterVolumeSpecName: "kube-api-access-9h4dg") pod "31fc43cb-0b18-49b4-a19b-6047e962f742" (UID: "31fc43cb-0b18-49b4-a19b-6047e962f742"). InnerVolumeSpecName "kube-api-access-9h4dg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 14:00:07 crc kubenswrapper[4769]: I0122 14:00:07.414034 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9h4dg\" (UniqueName: \"kubernetes.io/projected/31fc43cb-0b18-49b4-a19b-6047e962f742-kube-api-access-9h4dg\") on node \"crc\" DevicePath \"\""
Jan 22 14:00:07 crc kubenswrapper[4769]: I0122 14:00:07.414065 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31fc43cb-0b18-49b4-a19b-6047e962f742-config\") on node \"crc\" DevicePath \"\""
Jan 22 14:00:07 crc kubenswrapper[4769]: I0122 14:00:07.466602 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484840-2ln64" event={"ID":"b8a0650e-6e96-491e-88df-d228be8155e1","Type":"ContainerStarted","Data":"d9d710928e4433f5dd0e9be2190ede9e3b125f18a2ee1bfedf9c84ebf537f3b3"}
Jan 22 14:00:07 crc kubenswrapper[4769]: I0122 14:00:07.467860 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6e7522e6-de75-492d-b445-a463f875e393","Type":"ContainerStarted","Data":"cb0f27b9c3686fd6437f8bd8519d2239c1ac22e630bed57eba5dc3bb400528c4"}
Jan 22 14:00:07 crc kubenswrapper[4769]: I0122 14:00:07.469452 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"760402cd-68ff-4d2e-a1ba-c54132e75c13","Type":"ContainerStarted","Data":"5bdfa8a3a5389929b46e2cca659be0dd29437e092c8f665d2fc10c73fde2ca38"}
Jan 22 14:00:07 crc kubenswrapper[4769]: I0122 14:00:07.470871 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-8mfxs"
Jan 22 14:00:07 crc kubenswrapper[4769]: I0122 14:00:07.472224 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-8mfxs" event={"ID":"8ba28aa8-af6e-4b05-b308-1a5d989da923","Type":"ContainerDied","Data":"03a0c26c66ed5a4ae9f84cf892076a64dbf88a82ab566091a04d268bb55d7269"}
Jan 22 14:00:07 crc kubenswrapper[4769]: I0122 14:00:07.475061 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-hwccv" event={"ID":"31fc43cb-0b18-49b4-a19b-6047e962f742","Type":"ContainerDied","Data":"4b0f19965ee16c593d67cb00a080c26ccc988a655a590dee7acff08c668a12d9"}
Jan 22 14:00:07 crc kubenswrapper[4769]: I0122 14:00:07.475154 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-hwccv"
Jan 22 14:00:07 crc kubenswrapper[4769]: I0122 14:00:07.480983 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"1a4e51d1-8dea-4f12-b7e9-7888f5672711","Type":"ContainerStarted","Data":"8e253edc1258b967da233f5f102d23b1d6d8b7632597b5713a378395c8c4aa76"}
Jan 22 14:00:07 crc kubenswrapper[4769]: I0122 14:00:07.483104 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ljbrk" event={"ID":"db7ce269-d7ec-4db1-aab3-b22da5d56c6e","Type":"ContainerStarted","Data":"e84fd0a260edc9ab68448161b8b2806b1fa84f91e3e93022f3f6f4d06802a2ba"}
Jan 22 14:00:07 crc kubenswrapper[4769]: I0122 14:00:07.487815 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-57w6l" event={"ID":"2f6b8be2-7370-47ca-843b-1dea67d837c3","Type":"ContainerStarted","Data":"69ee5901eeefc3366f3ede871af6311168586781f9d819ea65be75a58690b69d"}
Jan 22 14:00:07 crc kubenswrapper[4769]: I0122 14:00:07.726833 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-hwccv"]
Jan 22 14:00:07 crc kubenswrapper[4769]: I0122 14:00:07.731966 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-hwccv"]
Jan 22 14:00:07 crc kubenswrapper[4769]: I0122 14:00:07.746185 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-8mfxs"]
Jan 22 14:00:07 crc kubenswrapper[4769]: I0122 14:00:07.753385 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-8mfxs"]
Jan 22 14:00:08 crc kubenswrapper[4769]: I0122 14:00:08.500271 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7b5386c6-ecca-4882-b692-80c4f5a194e7","Type":"ContainerStarted","Data":"cd37417a78b080b1ccc1b5edbe869aca8460373ef9a4d35cbfcb0a8060072f8f"}
Jan 22 14:00:08 crc kubenswrapper[4769]: I0122 14:00:08.503111 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-qvqgs" event={"ID":"b51a7d68-4414-4157-ab31-b5ee67a26b87","Type":"ContainerStarted","Data":"8a49eca2021a2295ffe88f33f58659f6911edf81dd9a4c1261422569e89aab41"}
Jan 22 14:00:08 crc kubenswrapper[4769]: I0122 14:00:08.503192 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-qvqgs"
Jan 22 14:00:08 crc kubenswrapper[4769]: I0122 14:00:08.505948 4769 generic.go:334] "Generic (PLEG): container finished" podID="b8a0650e-6e96-491e-88df-d228be8155e1" containerID="b13faa7bdb54d2f31f81f30cd670139cd9b89adfb82f77120bfad2d5527962d2" exitCode=0
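The entries above follow journald's syslog framing ("Jan 22 14:00:07 crc kubenswrapper[4769]:") wrapped around klog's native header: severity letter plus MMDD, wall-clock time, PID, source file:line, then the structured message. A minimal sketch of splitting these fields for analysis; the regex and field names are illustrative assumptions for this log's shape, not part of kubelet:

    import re

    # Matches e.g.:
    # Jan 22 14:00:07 crc kubenswrapper[4769]: I0122 14:00:07.076322 4769 util.go:48] "No ready sandbox ..."
    KLOG = re.compile(
        r'^(?P<month>\w{3}) (?P<day>\d+) (?P<hms>[\d:]+) (?P<host>\S+) '
        r'(?P<unit>[\w-]+)\[(?P<syspid>\d+)\]: '
        r'(?P<sev>[IWEF])(?P<mmdd>\d{4}) (?P<time>[\d:.]+)\s+(?P<pid>\d+) '
        r'(?P<src>[\w./_-]+:\d+)\] (?P<msg>.*)$'
    )

    def parse(line: str):
        """Return the klog fields of one entry, or None for continuations."""
        m = KLOG.match(line)
        return m.groupdict() if m else None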
Jan 22 14:00:08 crc kubenswrapper[4769]: I0122 14:00:08.506022 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484840-2ln64" event={"ID":"b8a0650e-6e96-491e-88df-d228be8155e1","Type":"ContainerDied","Data":"b13faa7bdb54d2f31f81f30cd670139cd9b89adfb82f77120bfad2d5527962d2"}
Jan 22 14:00:08 crc kubenswrapper[4769]: I0122 14:00:08.508186 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-4c5lx" event={"ID":"0e6c47fe-34e3-498e-a488-96efc7e689b0","Type":"ContainerStarted","Data":"a3c2ec4ffc4b524e59a664b163a1deea35d1d62b8cc245aafee6a6a6f1417f54"}
Jan 22 14:00:08 crc kubenswrapper[4769]: I0122 14:00:08.508335 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-4c5lx"
Jan 22 14:00:08 crc kubenswrapper[4769]: I0122 14:00:08.510530 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"12de511c-514e-496c-9fbf-6d1e10db81fc","Type":"ContainerStarted","Data":"02b31e2a239b0168026857e943798de5de7f95b04782c217474e99a5a431076d"}
Jan 22 14:00:08 crc kubenswrapper[4769]: I0122 14:00:08.569466 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-4c5lx" podStartSLOduration=4.907553848 podStartE2EDuration="18.569444314s" podCreationTimestamp="2026-01-22 13:59:50 +0000 UTC" firstStartedPulling="2026-01-22 13:59:51.369826625 +0000 UTC m=+970.780936554" lastFinishedPulling="2026-01-22 14:00:05.031717081 +0000 UTC m=+984.442827020" observedRunningTime="2026-01-22 14:00:08.55634477 +0000 UTC m=+987.967454699" watchObservedRunningTime="2026-01-22 14:00:08.569444314 +0000 UTC m=+987.980554243"
Jan 22 14:00:08 crc kubenswrapper[4769]: I0122 14:00:08.582274 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-qvqgs" podStartSLOduration=4.859843879 podStartE2EDuration="18.58225542s" podCreationTimestamp="2026-01-22 13:59:50 +0000 UTC" firstStartedPulling="2026-01-22 13:59:51.311351713 +0000 UTC m=+970.722461642" lastFinishedPulling="2026-01-22 14:00:05.033763254 +0000 UTC m=+984.444873183" observedRunningTime="2026-01-22 14:00:08.577655099 +0000 UTC m=+987.988765038" watchObservedRunningTime="2026-01-22 14:00:08.58225542 +0000 UTC m=+987.993365349"
Jan 22 14:00:08 crc kubenswrapper[4769]: I0122 14:00:08.893505 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31fc43cb-0b18-49b4-a19b-6047e962f742" path="/var/lib/kubelet/pods/31fc43cb-0b18-49b4-a19b-6047e962f742/volumes"
Jan 22 14:00:08 crc kubenswrapper[4769]: I0122 14:00:08.893986 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ba28aa8-af6e-4b05-b308-1a5d989da923" path="/var/lib/kubelet/pods/8ba28aa8-af6e-4b05-b308-1a5d989da923/volumes"
Jan 22 14:00:12 crc kubenswrapper[4769]: I0122 14:00:12.838013 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484840-2ln64"
Jan 22 14:00:12 crc kubenswrapper[4769]: I0122 14:00:12.914551 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b8a0650e-6e96-491e-88df-d228be8155e1-secret-volume\") pod \"b8a0650e-6e96-491e-88df-d228be8155e1\" (UID: \"b8a0650e-6e96-491e-88df-d228be8155e1\") "
Jan 22 14:00:12 crc kubenswrapper[4769]: I0122 14:00:12.914610 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b8a0650e-6e96-491e-88df-d228be8155e1-config-volume\") pod \"b8a0650e-6e96-491e-88df-d228be8155e1\" (UID: \"b8a0650e-6e96-491e-88df-d228be8155e1\") "
Jan 22 14:00:12 crc kubenswrapper[4769]: I0122 14:00:12.914694 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57ts4\" (UniqueName: \"kubernetes.io/projected/b8a0650e-6e96-491e-88df-d228be8155e1-kube-api-access-57ts4\") pod \"b8a0650e-6e96-491e-88df-d228be8155e1\" (UID: \"b8a0650e-6e96-491e-88df-d228be8155e1\") "
Jan 22 14:00:12 crc kubenswrapper[4769]: I0122 14:00:12.915522 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8a0650e-6e96-491e-88df-d228be8155e1-config-volume" (OuterVolumeSpecName: "config-volume") pod "b8a0650e-6e96-491e-88df-d228be8155e1" (UID: "b8a0650e-6e96-491e-88df-d228be8155e1"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 14:00:12 crc kubenswrapper[4769]: I0122 14:00:12.920802 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8a0650e-6e96-491e-88df-d228be8155e1-kube-api-access-57ts4" (OuterVolumeSpecName: "kube-api-access-57ts4") pod "b8a0650e-6e96-491e-88df-d228be8155e1" (UID: "b8a0650e-6e96-491e-88df-d228be8155e1"). InnerVolumeSpecName "kube-api-access-57ts4". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:00:13 crc kubenswrapper[4769]: I0122 14:00:13.016237 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57ts4\" (UniqueName: \"kubernetes.io/projected/b8a0650e-6e96-491e-88df-d228be8155e1-kube-api-access-57ts4\") on node \"crc\" DevicePath \"\"" Jan 22 14:00:13 crc kubenswrapper[4769]: I0122 14:00:13.016278 4769 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b8a0650e-6e96-491e-88df-d228be8155e1-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 22 14:00:13 crc kubenswrapper[4769]: I0122 14:00:13.016288 4769 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b8a0650e-6e96-491e-88df-d228be8155e1-config-volume\") on node \"crc\" DevicePath \"\"" Jan 22 14:00:13 crc kubenswrapper[4769]: I0122 14:00:13.559157 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484840-2ln64" event={"ID":"b8a0650e-6e96-491e-88df-d228be8155e1","Type":"ContainerDied","Data":"d9d710928e4433f5dd0e9be2190ede9e3b125f18a2ee1bfedf9c84ebf537f3b3"} Jan 22 14:00:13 crc kubenswrapper[4769]: I0122 14:00:13.559188 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484840-2ln64" Jan 22 14:00:13 crc kubenswrapper[4769]: I0122 14:00:13.559224 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9d710928e4433f5dd0e9be2190ede9e3b125f18a2ee1bfedf9c84ebf537f3b3" Jan 22 14:00:14 crc kubenswrapper[4769]: I0122 14:00:14.568701 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"d5478968-e798-44de-b3ed-632864fc0607","Type":"ContainerStarted","Data":"0afd7437e75b49f642960a02d03f03938d716eec8201f40d3ed5c5c261334175"} Jan 22 14:00:14 crc kubenswrapper[4769]: I0122 14:00:14.576221 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"3aa5525a-0eb2-487f-8721-3ef58f5df4aa","Type":"ContainerStarted","Data":"21548a6c8213d484e0dd4fe09e62fb75dcdebf16d0f5d31b09b1149303916de6"} Jan 22 14:00:14 crc kubenswrapper[4769]: I0122 14:00:14.576374 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 22 14:00:14 crc kubenswrapper[4769]: I0122 14:00:14.619063 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=12.87032325 podStartE2EDuration="20.619028427s" podCreationTimestamp="2026-01-22 13:59:54 +0000 UTC" firstStartedPulling="2026-01-22 14:00:05.466027612 +0000 UTC m=+984.877137541" lastFinishedPulling="2026-01-22 14:00:13.214732789 +0000 UTC m=+992.625842718" observedRunningTime="2026-01-22 14:00:14.614755415 +0000 UTC m=+994.025865344" watchObservedRunningTime="2026-01-22 14:00:14.619028427 +0000 UTC m=+994.030138346" Jan 22 14:00:15 crc kubenswrapper[4769]: I0122 14:00:15.590515 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ljbrk" event={"ID":"db7ce269-d7ec-4db1-aab3-b22da5d56c6e","Type":"ContainerStarted","Data":"ba45903685c9d50a9fa25dd56749b192901d5d4436b77f70c03fd2036ec364d5"} Jan 22 14:00:15 crc kubenswrapper[4769]: I0122 14:00:15.591858 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ljbrk" Jan 22 14:00:15 crc kubenswrapper[4769]: I0122 14:00:15.592089 
4769 generic.go:334] "Generic (PLEG): container finished" podID="2f6b8be2-7370-47ca-843b-1dea67d837c3" containerID="4d7fdba300b46601763a56f3d07345d0392d08985f1061796bbcbc2dfb3c74f3" exitCode=0 Jan 22 14:00:15 crc kubenswrapper[4769]: I0122 14:00:15.592172 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-57w6l" event={"ID":"2f6b8be2-7370-47ca-843b-1dea67d837c3","Type":"ContainerDied","Data":"4d7fdba300b46601763a56f3d07345d0392d08985f1061796bbcbc2dfb3c74f3"} Jan 22 14:00:15 crc kubenswrapper[4769]: I0122 14:00:15.594214 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6e7522e6-de75-492d-b445-a463f875e393","Type":"ContainerStarted","Data":"b5c1102409d5a3f0491aca7b10a914b1f650214297aaff7b15a9e7d0fb19780f"} Jan 22 14:00:15 crc kubenswrapper[4769]: I0122 14:00:15.595081 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 22 14:00:15 crc kubenswrapper[4769]: I0122 14:00:15.597518 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"760402cd-68ff-4d2e-a1ba-c54132e75c13","Type":"ContainerStarted","Data":"3959ddc84318de0ab65be59c34120b53236f9e6ac62d7c1f9f0c130530676e02"} Jan 22 14:00:15 crc kubenswrapper[4769]: I0122 14:00:15.599547 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"048fbe43-0fef-46e8-bc9d-038c96a4696c","Type":"ContainerStarted","Data":"54c0e6317044865508a4ba1510f495e603533d4a18e8d0b35f92da59b89098eb"} Jan 22 14:00:15 crc kubenswrapper[4769]: I0122 14:00:15.601148 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"1a4e51d1-8dea-4f12-b7e9-7888f5672711","Type":"ContainerStarted","Data":"121328f30451daebae9d2c6e8c47cd3fc593781f2d600f51a2d4a1bf39d37dfd"} Jan 22 14:00:15 crc kubenswrapper[4769]: I0122 14:00:15.613809 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ljbrk" podStartSLOduration=9.661007793 podStartE2EDuration="16.613777003s" podCreationTimestamp="2026-01-22 13:59:59 +0000 UTC" firstStartedPulling="2026-01-22 14:00:06.887675585 +0000 UTC m=+986.298785514" lastFinishedPulling="2026-01-22 14:00:13.840444805 +0000 UTC m=+993.251554724" observedRunningTime="2026-01-22 14:00:15.607483428 +0000 UTC m=+995.018593367" watchObservedRunningTime="2026-01-22 14:00:15.613777003 +0000 UTC m=+995.024886932" Jan 22 14:00:15 crc kubenswrapper[4769]: I0122 14:00:15.665713 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=12.180047359 podStartE2EDuration="19.665694814s" podCreationTimestamp="2026-01-22 13:59:56 +0000 UTC" firstStartedPulling="2026-01-22 14:00:06.8775945 +0000 UTC m=+986.288704429" lastFinishedPulling="2026-01-22 14:00:14.363241945 +0000 UTC m=+993.774351884" observedRunningTime="2026-01-22 14:00:15.664831911 +0000 UTC m=+995.075941840" watchObservedRunningTime="2026-01-22 14:00:15.665694814 +0000 UTC m=+995.076804743" Jan 22 14:00:15 crc kubenswrapper[4769]: I0122 14:00:15.809947 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-666b6646f7-4c5lx" Jan 22 14:00:15 crc kubenswrapper[4769]: I0122 14:00:15.862531 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d769cc4f-qvqgs" Jan 22 14:00:15 crc kubenswrapper[4769]: I0122 14:00:15.930520 4769 
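The "Observed pod startup duration" entries above are internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration works out to that E2E figure minus the time spent pulling images, computed on the kubelet's monotonic clock (the m=+... offsets). A sketch checking the dnsmasq-dns-666b6646f7-4c5lx entry; the numbers are copied from the log, the relation is inferred from them:

    # Monotonic offsets (m=+...) from the dnsmasq-dns-666b6646f7-4c5lx entry.
    first_pull = 970.780936554   # firstStartedPulling
    last_pull  = 984.442827020   # lastFinishedPulling
    e2e        = 18.569444314    # watchObservedRunningTime - podCreationTimestamp, in s

    pull = last_pull - first_pull   # 13.661890466 s spent pulling images
    slo  = e2e - pull               # 4.907553848 s -> matches podStartSLOduration
    print(f"pull={pull:.9f}s slo={slo:.9f}s")

Using the wall-clock pull timestamps instead gives 4.907553858 s, off in the last digit; the match only falls out exactly on the monotonic values, which suggests the SLO figure is clock-skew-safe by design.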
Jan 22 14:00:15 crc kubenswrapper[4769]: I0122 14:00:15.930520 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-4c5lx"]
Jan 22 14:00:16 crc kubenswrapper[4769]: I0122 14:00:16.614037 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-57w6l" event={"ID":"2f6b8be2-7370-47ca-843b-1dea67d837c3","Type":"ContainerStarted","Data":"403630aeb0a046af747092fbec28b3c7a35d4d9a9f94b0b704c9179e90ab6e7d"}
Jan 22 14:00:16 crc kubenswrapper[4769]: I0122 14:00:16.614345 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-57w6l" event={"ID":"2f6b8be2-7370-47ca-843b-1dea67d837c3","Type":"ContainerStarted","Data":"8a8365e1cba25eb5c1285c7e161f8031425bf00f5bef1e99a8d7cc080522c76d"}
Jan 22 14:00:16 crc kubenswrapper[4769]: I0122 14:00:16.614467 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-4c5lx" podUID="0e6c47fe-34e3-498e-a488-96efc7e689b0" containerName="dnsmasq-dns" containerID="cri-o://a3c2ec4ffc4b524e59a664b163a1deea35d1d62b8cc245aafee6a6a6f1417f54" gracePeriod=10
Jan 22 14:00:16 crc kubenswrapper[4769]: I0122 14:00:16.648835 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-57w6l" podStartSLOduration=11.144977178 podStartE2EDuration="17.648813044s" podCreationTimestamp="2026-01-22 13:59:59 +0000 UTC" firstStartedPulling="2026-01-22 14:00:06.865931105 +0000 UTC m=+986.277041024" lastFinishedPulling="2026-01-22 14:00:13.369766961 +0000 UTC m=+992.780876890" observedRunningTime="2026-01-22 14:00:16.644764768 +0000 UTC m=+996.055874717" watchObservedRunningTime="2026-01-22 14:00:16.648813044 +0000 UTC m=+996.059922973"
Jan 22 14:00:17 crc kubenswrapper[4769]: I0122 14:00:17.179520 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-4c5lx"
Jan 22 14:00:17 crc kubenswrapper[4769]: I0122 14:00:17.321437 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjjzn\" (UniqueName: \"kubernetes.io/projected/0e6c47fe-34e3-498e-a488-96efc7e689b0-kube-api-access-sjjzn\") pod \"0e6c47fe-34e3-498e-a488-96efc7e689b0\" (UID: \"0e6c47fe-34e3-498e-a488-96efc7e689b0\") "
Jan 22 14:00:17 crc kubenswrapper[4769]: I0122 14:00:17.321524 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0e6c47fe-34e3-498e-a488-96efc7e689b0-dns-svc\") pod \"0e6c47fe-34e3-498e-a488-96efc7e689b0\" (UID: \"0e6c47fe-34e3-498e-a488-96efc7e689b0\") "
Jan 22 14:00:17 crc kubenswrapper[4769]: I0122 14:00:17.321628 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e6c47fe-34e3-498e-a488-96efc7e689b0-config\") pod \"0e6c47fe-34e3-498e-a488-96efc7e689b0\" (UID: \"0e6c47fe-34e3-498e-a488-96efc7e689b0\") "
Jan 22 14:00:17 crc kubenswrapper[4769]: I0122 14:00:17.328208 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e6c47fe-34e3-498e-a488-96efc7e689b0-kube-api-access-sjjzn" (OuterVolumeSpecName: "kube-api-access-sjjzn") pod "0e6c47fe-34e3-498e-a488-96efc7e689b0" (UID: "0e6c47fe-34e3-498e-a488-96efc7e689b0"). InnerVolumeSpecName "kube-api-access-sjjzn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 14:00:17 crc kubenswrapper[4769]: I0122 14:00:17.366977 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e6c47fe-34e3-498e-a488-96efc7e689b0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0e6c47fe-34e3-498e-a488-96efc7e689b0" (UID: "0e6c47fe-34e3-498e-a488-96efc7e689b0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 14:00:17 crc kubenswrapper[4769]: I0122 14:00:17.368510 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e6c47fe-34e3-498e-a488-96efc7e689b0-config" (OuterVolumeSpecName: "config") pod "0e6c47fe-34e3-498e-a488-96efc7e689b0" (UID: "0e6c47fe-34e3-498e-a488-96efc7e689b0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 14:00:17 crc kubenswrapper[4769]: I0122 14:00:17.424404 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e6c47fe-34e3-498e-a488-96efc7e689b0-config\") on node \"crc\" DevicePath \"\""
Jan 22 14:00:17 crc kubenswrapper[4769]: I0122 14:00:17.424755 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjjzn\" (UniqueName: \"kubernetes.io/projected/0e6c47fe-34e3-498e-a488-96efc7e689b0-kube-api-access-sjjzn\") on node \"crc\" DevicePath \"\""
Jan 22 14:00:17 crc kubenswrapper[4769]: I0122 14:00:17.424771 4769 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0e6c47fe-34e3-498e-a488-96efc7e689b0-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 22 14:00:17 crc kubenswrapper[4769]: I0122 14:00:17.623405 4769 generic.go:334] "Generic (PLEG): container finished" podID="0e6c47fe-34e3-498e-a488-96efc7e689b0" containerID="a3c2ec4ffc4b524e59a664b163a1deea35d1d62b8cc245aafee6a6a6f1417f54" exitCode=0
Jan 22 14:00:17 crc kubenswrapper[4769]: I0122 14:00:17.623501 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-4c5lx"
Jan 22 14:00:17 crc kubenswrapper[4769]: I0122 14:00:17.623521 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-4c5lx" event={"ID":"0e6c47fe-34e3-498e-a488-96efc7e689b0","Type":"ContainerDied","Data":"a3c2ec4ffc4b524e59a664b163a1deea35d1d62b8cc245aafee6a6a6f1417f54"}
Jan 22 14:00:17 crc kubenswrapper[4769]: I0122 14:00:17.623547 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-4c5lx" event={"ID":"0e6c47fe-34e3-498e-a488-96efc7e689b0","Type":"ContainerDied","Data":"8215e5b4fd26aed68a6a57c59e5f8a125091e3ac96652ebf56614a1931aa9fcb"}
Jan 22 14:00:17 crc kubenswrapper[4769]: I0122 14:00:17.623566 4769 scope.go:117] "RemoveContainer" containerID="a3c2ec4ffc4b524e59a664b163a1deea35d1d62b8cc245aafee6a6a6f1417f54"
Jan 22 14:00:17 crc kubenswrapper[4769]: I0122 14:00:17.623845 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-57w6l"
Jan 22 14:00:17 crc kubenswrapper[4769]: I0122 14:00:17.623859 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-57w6l"
Jan 22 14:00:17 crc kubenswrapper[4769]: I0122 14:00:17.658402 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-4c5lx"]
Jan 22 14:00:17 crc kubenswrapper[4769]: I0122 14:00:17.664954 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-4c5lx"]
Jan 22 14:00:18 crc kubenswrapper[4769]: I0122 14:00:18.663002 4769 scope.go:117] "RemoveContainer" containerID="84798df2ae1ab219f2618c1a2106e22205e2ad5f85b084c29279df47b1ca4989"
Jan 22 14:00:18 crc kubenswrapper[4769]: I0122 14:00:18.891345 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e6c47fe-34e3-498e-a488-96efc7e689b0" path="/var/lib/kubelet/pods/0e6c47fe-34e3-498e-a488-96efc7e689b0/volumes"
Jan 22 14:00:19 crc kubenswrapper[4769]: I0122 14:00:19.667225 4769 generic.go:334] "Generic (PLEG): container finished" podID="d5478968-e798-44de-b3ed-632864fc0607" containerID="0afd7437e75b49f642960a02d03f03938d716eec8201f40d3ed5c5c261334175" exitCode=0
Jan 22 14:00:19 crc kubenswrapper[4769]: I0122 14:00:19.667260 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"d5478968-e798-44de-b3ed-632864fc0607","Type":"ContainerDied","Data":"0afd7437e75b49f642960a02d03f03938d716eec8201f40d3ed5c5c261334175"}
Jan 22 14:00:19 crc kubenswrapper[4769]: I0122 14:00:19.862887 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0"
Jan 22 14:00:26 crc kubenswrapper[4769]: I0122 14:00:20.691095 4769 generic.go:334] "Generic (PLEG): container finished" podID="048fbe43-0fef-46e8-bc9d-038c96a4696c" containerID="54c0e6317044865508a4ba1510f495e603533d4a18e8d0b35f92da59b89098eb" exitCode=0
Jan 22 14:00:26 crc kubenswrapper[4769]: I0122 14:00:20.691131 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"048fbe43-0fef-46e8-bc9d-038c96a4696c","Type":"ContainerDied","Data":"54c0e6317044865508a4ba1510f495e603533d4a18e8d0b35f92da59b89098eb"}
Jan 22 14:00:26 crc kubenswrapper[4769]: I0122 14:00:26.583319 4769 scope.go:117] "RemoveContainer" containerID="a3c2ec4ffc4b524e59a664b163a1deea35d1d62b8cc245aafee6a6a6f1417f54"
Jan 22 14:00:26 crc kubenswrapper[4769]: E0122 14:00:26.584341 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3c2ec4ffc4b524e59a664b163a1deea35d1d62b8cc245aafee6a6a6f1417f54\": container with ID starting with a3c2ec4ffc4b524e59a664b163a1deea35d1d62b8cc245aafee6a6a6f1417f54 not found: ID does not exist" containerID="a3c2ec4ffc4b524e59a664b163a1deea35d1d62b8cc245aafee6a6a6f1417f54"
Jan 22 14:00:26 crc kubenswrapper[4769]: I0122 14:00:26.584390 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3c2ec4ffc4b524e59a664b163a1deea35d1d62b8cc245aafee6a6a6f1417f54"} err="failed to get container status \"a3c2ec4ffc4b524e59a664b163a1deea35d1d62b8cc245aafee6a6a6f1417f54\": rpc error: code = NotFound desc = could not find container \"a3c2ec4ffc4b524e59a664b163a1deea35d1d62b8cc245aafee6a6a6f1417f54\": container with ID starting with a3c2ec4ffc4b524e59a664b163a1deea35d1d62b8cc245aafee6a6a6f1417f54 not found: ID does not exist"
Jan 22 14:00:26 crc kubenswrapper[4769]: I0122 14:00:26.584420 4769 scope.go:117] "RemoveContainer" containerID="84798df2ae1ab219f2618c1a2106e22205e2ad5f85b084c29279df47b1ca4989"
Jan 22 14:00:26 crc kubenswrapper[4769]: E0122 14:00:26.584668 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84798df2ae1ab219f2618c1a2106e22205e2ad5f85b084c29279df47b1ca4989\": container with ID starting with 84798df2ae1ab219f2618c1a2106e22205e2ad5f85b084c29279df47b1ca4989 not found: ID does not exist" containerID="84798df2ae1ab219f2618c1a2106e22205e2ad5f85b084c29279df47b1ca4989"
Jan 22 14:00:26 crc kubenswrapper[4769]: I0122 14:00:26.584697 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84798df2ae1ab219f2618c1a2106e22205e2ad5f85b084c29279df47b1ca4989"} err="failed to get container status \"84798df2ae1ab219f2618c1a2106e22205e2ad5f85b084c29279df47b1ca4989\": rpc error: code = NotFound desc = could not find container \"84798df2ae1ab219f2618c1a2106e22205e2ad5f85b084c29279df47b1ca4989\": container with ID starting with 84798df2ae1ab219f2618c1a2106e22205e2ad5f85b084c29279df47b1ca4989 not found: ID does not exist"
Jan 22 14:00:27 crc kubenswrapper[4769]: I0122 14:00:27.081441 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-5f5mt"]
Jan 22 14:00:27 crc kubenswrapper[4769]: E0122 14:00:27.087165 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e6c47fe-34e3-498e-a488-96efc7e689b0" containerName="init"
Jan 22 14:00:27 crc kubenswrapper[4769]: I0122 14:00:27.087193 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e6c47fe-34e3-498e-a488-96efc7e689b0" containerName="init"
Jan 22 14:00:27 crc kubenswrapper[4769]: E0122 14:00:27.087222 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e6c47fe-34e3-498e-a488-96efc7e689b0" containerName="dnsmasq-dns"
Jan 22 14:00:27 crc kubenswrapper[4769]: I0122 14:00:27.087229 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e6c47fe-34e3-498e-a488-96efc7e689b0" containerName="dnsmasq-dns"
Jan 22 14:00:27 crc kubenswrapper[4769]: E0122 14:00:27.087246 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8a0650e-6e96-491e-88df-d228be8155e1" containerName="collect-profiles"
Jan 22 14:00:27 crc kubenswrapper[4769]: I0122 14:00:27.087253 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8a0650e-6e96-491e-88df-d228be8155e1" containerName="collect-profiles"
Jan 22 14:00:27 crc kubenswrapper[4769]: I0122 14:00:27.087447 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8a0650e-6e96-491e-88df-d228be8155e1" containerName="collect-profiles"
Jan 22 14:00:27 crc kubenswrapper[4769]: I0122 14:00:27.087469 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e6c47fe-34e3-498e-a488-96efc7e689b0" containerName="dnsmasq-dns"
Jan 22 14:00:27 crc kubenswrapper[4769]: I0122 14:00:27.088407 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-5f5mt"
Jan 22 14:00:27 crc kubenswrapper[4769]: I0122 14:00:27.098542 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-5f5mt"]
Jan 22 14:00:27 crc kubenswrapper[4769]: I0122 14:00:27.101461 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Jan 22 14:00:27 crc kubenswrapper[4769]: I0122 14:00:27.277744 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ccf209c-9829-41bd-af53-26ea82e6c9e0-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-5f5mt\" (UID: \"9ccf209c-9829-41bd-af53-26ea82e6c9e0\") " pod="openstack/dnsmasq-dns-7cb5889db5-5f5mt"
Jan 22 14:00:27 crc kubenswrapper[4769]: I0122 14:00:27.277986 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ccf209c-9829-41bd-af53-26ea82e6c9e0-config\") pod \"dnsmasq-dns-7cb5889db5-5f5mt\" (UID: \"9ccf209c-9829-41bd-af53-26ea82e6c9e0\") " pod="openstack/dnsmasq-dns-7cb5889db5-5f5mt"
Jan 22 14:00:27 crc kubenswrapper[4769]: I0122 14:00:27.278137 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fpmd\" (UniqueName: \"kubernetes.io/projected/9ccf209c-9829-41bd-af53-26ea82e6c9e0-kube-api-access-6fpmd\") pod \"dnsmasq-dns-7cb5889db5-5f5mt\" (UID: \"9ccf209c-9829-41bd-af53-26ea82e6c9e0\") " pod="openstack/dnsmasq-dns-7cb5889db5-5f5mt"
Jan 22 14:00:27 crc kubenswrapper[4769]: I0122 14:00:27.380078 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ccf209c-9829-41bd-af53-26ea82e6c9e0-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-5f5mt\" (UID: \"9ccf209c-9829-41bd-af53-26ea82e6c9e0\") " pod="openstack/dnsmasq-dns-7cb5889db5-5f5mt"
Jan 22 14:00:27 crc kubenswrapper[4769]: I0122 14:00:27.380445 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ccf209c-9829-41bd-af53-26ea82e6c9e0-config\") pod \"dnsmasq-dns-7cb5889db5-5f5mt\" (UID: \"9ccf209c-9829-41bd-af53-26ea82e6c9e0\") " pod="openstack/dnsmasq-dns-7cb5889db5-5f5mt"
Jan 22 14:00:27 crc kubenswrapper[4769]: I0122 14:00:27.380483 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fpmd\" (UniqueName: \"kubernetes.io/projected/9ccf209c-9829-41bd-af53-26ea82e6c9e0-kube-api-access-6fpmd\") pod \"dnsmasq-dns-7cb5889db5-5f5mt\" (UID: \"9ccf209c-9829-41bd-af53-26ea82e6c9e0\") " pod="openstack/dnsmasq-dns-7cb5889db5-5f5mt"
Jan 22 14:00:27 crc kubenswrapper[4769]: I0122 14:00:27.381806 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ccf209c-9829-41bd-af53-26ea82e6c9e0-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-5f5mt\" (UID: \"9ccf209c-9829-41bd-af53-26ea82e6c9e0\") " pod="openstack/dnsmasq-dns-7cb5889db5-5f5mt"
Jan 22 14:00:27 crc kubenswrapper[4769]: I0122 14:00:27.382390 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ccf209c-9829-41bd-af53-26ea82e6c9e0-config\") pod \"dnsmasq-dns-7cb5889db5-5f5mt\" (UID: \"9ccf209c-9829-41bd-af53-26ea82e6c9e0\") " pod="openstack/dnsmasq-dns-7cb5889db5-5f5mt"
Jan 22 14:00:27 crc kubenswrapper[4769]: I0122 14:00:27.399173 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fpmd\" (UniqueName: \"kubernetes.io/projected/9ccf209c-9829-41bd-af53-26ea82e6c9e0-kube-api-access-6fpmd\") pod \"dnsmasq-dns-7cb5889db5-5f5mt\" (UID: \"9ccf209c-9829-41bd-af53-26ea82e6c9e0\") " pod="openstack/dnsmasq-dns-7cb5889db5-5f5mt"
Jan 22 14:00:27 crc kubenswrapper[4769]: I0122 14:00:27.410627 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-5f5mt"
Jan 22 14:00:27 crc kubenswrapper[4769]: I0122 14:00:27.744964 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"048fbe43-0fef-46e8-bc9d-038c96a4696c","Type":"ContainerStarted","Data":"1d7a9c196c826197a35b4dc8d806edfa528f1331f96842069a37f1b52fa7dc55"}
Jan 22 14:00:27 crc kubenswrapper[4769]: I0122 14:00:27.747041 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"1a4e51d1-8dea-4f12-b7e9-7888f5672711","Type":"ContainerStarted","Data":"c75905ece5affa6d47506c893319fe219eb68f7809a10fee02bad716f88a9936"}
Jan 22 14:00:27 crc kubenswrapper[4769]: I0122 14:00:27.748950 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"d5478968-e798-44de-b3ed-632864fc0607","Type":"ContainerStarted","Data":"330233cec66a5cad330a9043a8a7e1a16cf6c2ea3faaad17a73fbe3e5bcace85"}
Jan 22 14:00:27 crc kubenswrapper[4769]: I0122 14:00:27.750609 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"760402cd-68ff-4d2e-a1ba-c54132e75c13","Type":"ContainerStarted","Data":"3650a732270b43b89a87a9e0d4bc365b089e9e8dc1fe46e3ea657c3ab8a54ef6"}
Jan 22 14:00:27 crc kubenswrapper[4769]: I0122 14:00:27.774080 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=26.564157146 podStartE2EDuration="34.774058009s" podCreationTimestamp="2026-01-22 13:59:53 +0000 UTC" firstStartedPulling="2026-01-22 14:00:05.354079698 +0000 UTC m=+984.765189617" lastFinishedPulling="2026-01-22 14:00:13.563980541 +0000 UTC m=+992.975090480" observedRunningTime="2026-01-22 14:00:27.77257053 +0000 UTC m=+1007.183680539" watchObservedRunningTime="2026-01-22 14:00:27.774058009 +0000 UTC m=+1007.185167938"
Jan 22 14:00:27 crc kubenswrapper[4769]: I0122 14:00:27.800290 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=28.914321322 podStartE2EDuration="36.800268946s" podCreationTimestamp="2026-01-22 13:59:51 +0000 UTC" firstStartedPulling="2026-01-22 14:00:05.483835508 +0000 UTC m=+984.894945437" lastFinishedPulling="2026-01-22 14:00:13.369783132 +0000 UTC m=+992.780893061" observedRunningTime="2026-01-22 14:00:27.795917551 +0000 UTC m=+1007.207027500" watchObservedRunningTime="2026-01-22 14:00:27.800268946 +0000 UTC m=+1007.211378875"
Jan 22 14:00:27 crc kubenswrapper[4769]: I0122 14:00:27.819534 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=8.996894271 podStartE2EDuration="28.81951411s" podCreationTimestamp="2026-01-22 13:59:59 +0000 UTC" firstStartedPulling="2026-01-22 14:00:06.881152964 +0000 UTC m=+986.292262893" lastFinishedPulling="2026-01-22 14:00:26.703772803 +0000 UTC m=+1006.114882732" observedRunningTime="2026-01-22 14:00:27.816545692 +0000 UTC m=+1007.227655621" watchObservedRunningTime="2026-01-22 14:00:27.81951411 +0000 UTC m=+1007.230624039"
Jan 22 14:00:27 crc kubenswrapper[4769]: I0122 14:00:27.833651 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-5f5mt"]
Jan 22 14:00:27 crc kubenswrapper[4769]: I0122 14:00:27.847825 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=6.029520856 podStartE2EDuration="25.847805782s" podCreationTimestamp="2026-01-22 14:00:02 +0000 UTC" firstStartedPulling="2026-01-22 14:00:06.868700777 +0000 UTC m=+986.279810716" lastFinishedPulling="2026-01-22 14:00:26.686985713 +0000 UTC m=+1006.098095642" observedRunningTime="2026-01-22 14:00:27.846496308 +0000 UTC m=+1007.257606237" watchObservedRunningTime="2026-01-22 14:00:27.847805782 +0000 UTC m=+1007.258915711"
Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.263677 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"]
Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.272655 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.275528 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-sfs6t"
Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.275544 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data"
Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.280900 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files"
Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.281373 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf"
Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.282080 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.295993 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0"
Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.346889 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0"
Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.397357 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ce65dba3-22b9-482f-b3da-2f4705468ea4-etc-swift\") pod \"swift-storage-0\" (UID: \"ce65dba3-22b9-482f-b3da-2f4705468ea4\") " pod="openstack/swift-storage-0"
Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.397461 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/ce65dba3-22b9-482f-b3da-2f4705468ea4-lock\") pod \"swift-storage-0\" (UID: \"ce65dba3-22b9-482f-b3da-2f4705468ea4\") " pod="openstack/swift-storage-0"
Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.397662 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-0\" (UID: \"ce65dba3-22b9-482f-b3da-2f4705468ea4\") " pod="openstack/swift-storage-0"
Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.397748 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce65dba3-22b9-482f-b3da-2f4705468ea4-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"ce65dba3-22b9-482f-b3da-2f4705468ea4\") " pod="openstack/swift-storage-0"
Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.397784 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/ce65dba3-22b9-482f-b3da-2f4705468ea4-cache\") pod \"swift-storage-0\" (UID: \"ce65dba3-22b9-482f-b3da-2f4705468ea4\") " pod="openstack/swift-storage-0"
Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.397997 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrb6m\" (UniqueName: \"kubernetes.io/projected/ce65dba3-22b9-482f-b3da-2f4705468ea4-kube-api-access-xrb6m\") pod \"swift-storage-0\" (UID: \"ce65dba3-22b9-482f-b3da-2f4705468ea4\") " pod="openstack/swift-storage-0"
Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.499607 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrb6m\" (UniqueName: \"kubernetes.io/projected/ce65dba3-22b9-482f-b3da-2f4705468ea4-kube-api-access-xrb6m\") pod \"swift-storage-0\" (UID: \"ce65dba3-22b9-482f-b3da-2f4705468ea4\") " pod="openstack/swift-storage-0"
Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.499903 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ce65dba3-22b9-482f-b3da-2f4705468ea4-etc-swift\") pod \"swift-storage-0\" (UID: \"ce65dba3-22b9-482f-b3da-2f4705468ea4\") " pod="openstack/swift-storage-0"
Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.499942 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/ce65dba3-22b9-482f-b3da-2f4705468ea4-lock\") pod \"swift-storage-0\" (UID: \"ce65dba3-22b9-482f-b3da-2f4705468ea4\") " pod="openstack/swift-storage-0"
Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.499979 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-0\" (UID: \"ce65dba3-22b9-482f-b3da-2f4705468ea4\") " pod="openstack/swift-storage-0"
Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.499999 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce65dba3-22b9-482f-b3da-2f4705468ea4-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"ce65dba3-22b9-482f-b3da-2f4705468ea4\") " pod="openstack/swift-storage-0"
Jan 22 14:00:28 crc kubenswrapper[4769]: E0122 14:00:28.500062 4769 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 22 14:00:28 crc kubenswrapper[4769]: E0122 14:00:28.500081 4769 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.500097 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/ce65dba3-22b9-482f-b3da-2f4705468ea4-cache\") pod \"swift-storage-0\" (UID: \"ce65dba3-22b9-482f-b3da-2f4705468ea4\") " pod="openstack/swift-storage-0"
Jan 22 14:00:28 crc kubenswrapper[4769]: E0122 14:00:28.500127 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ce65dba3-22b9-482f-b3da-2f4705468ea4-etc-swift podName:ce65dba3-22b9-482f-b3da-2f4705468ea4 nodeName:}" failed. No retries permitted until 2026-01-22 14:00:29.000107254 +0000 UTC m=+1008.411217183 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ce65dba3-22b9-482f-b3da-2f4705468ea4-etc-swift") pod "swift-storage-0" (UID: "ce65dba3-22b9-482f-b3da-2f4705468ea4") : configmap "swift-ring-files" not found
Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.500314 4769 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-0\" (UID: \"ce65dba3-22b9-482f-b3da-2f4705468ea4\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/swift-storage-0"
Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.500441 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/ce65dba3-22b9-482f-b3da-2f4705468ea4-lock\") pod \"swift-storage-0\" (UID: \"ce65dba3-22b9-482f-b3da-2f4705468ea4\") " pod="openstack/swift-storage-0"
Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.500626 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/ce65dba3-22b9-482f-b3da-2f4705468ea4-cache\") pod \"swift-storage-0\" (UID: \"ce65dba3-22b9-482f-b3da-2f4705468ea4\") " pod="openstack/swift-storage-0"
Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.506940 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce65dba3-22b9-482f-b3da-2f4705468ea4-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"ce65dba3-22b9-482f-b3da-2f4705468ea4\") " pod="openstack/swift-storage-0"
Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.515671 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrb6m\" (UniqueName: \"kubernetes.io/projected/ce65dba3-22b9-482f-b3da-2f4705468ea4-kube-api-access-xrb6m\") pod \"swift-storage-0\" (UID: \"ce65dba3-22b9-482f-b3da-2f4705468ea4\") " pod="openstack/swift-storage-0"
Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.522907 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-0\" (UID: \"ce65dba3-22b9-482f-b3da-2f4705468ea4\") " pod="openstack/swift-storage-0"
Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.756977 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-jmhxf"]
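The etc-swift failure above is a benign ordering race: the projected volume references configmap "swift-ring-files", which the swift-ring-rebalance job scheduled moments later is responsible for producing, so the kubelet parks the mount and retries ("No retries permitted until ... durationBeforeRetry 500ms"). A minimal sketch of that retry gate; the doubling factor and cap are assumptions about the kubelet's exponential backoff, not read from this log:

    import time

    # Assumed backoff shape: start at the 500ms visible above, then double.
    # The 2m2s cap is an assumption, not shown in this log.
    INITIAL, FACTOR, CAP = 0.5, 2.0, 122.0

    def next_delay(prev):
        return INITIAL if prev is None else min(prev * FACTOR, CAP)

    def mount_with_backoff(mount, deadline=60.0):
        """Retry mount() until it succeeds or the deadline passes."""
        delay, start = None, time.monotonic()
        while time.monotonic() - start < deadline:
            try:
                return mount()                 # e.g. project the configmap
            except FileNotFoundError:          # "configmap ... not found"
                delay = next_delay(delay)
                time.sleep(delay)              # no retries permitted until now + delay
        raise TimeoutError("volume never became mountable")

Once the rebalance job publishes the configmap, the next retry succeeds and the pod proceeds; nothing here requires operator intervention.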
Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.758289 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-jmhxf"
Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.761532 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts"
Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.761917 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data"
Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.762207 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.770245 4769 generic.go:334] "Generic (PLEG): container finished" podID="9ccf209c-9829-41bd-af53-26ea82e6c9e0" containerID="787c971a0dea74b3f6ee351dd1bb60c21eb90e1fc50d951e6c355694f371ee32" exitCode=0
Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.770368 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-5f5mt" event={"ID":"9ccf209c-9829-41bd-af53-26ea82e6c9e0","Type":"ContainerDied","Data":"787c971a0dea74b3f6ee351dd1bb60c21eb90e1fc50d951e6c355694f371ee32"}
Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.770425 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-5f5mt" event={"ID":"9ccf209c-9829-41bd-af53-26ea82e6c9e0","Type":"ContainerStarted","Data":"25c320cddf3aa10b554d2c87ef85148faa26e18a085d0ac5f86a88df32d73795"}
Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.771164 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0"
Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.775622 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-jmhxf"]
Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.831557 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0"
Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.904677 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f13b9a7b-6f5e-48fd-8d95-3beb851e9819-etc-swift\") pod \"swift-ring-rebalance-jmhxf\" (UID: \"f13b9a7b-6f5e-48fd-8d95-3beb851e9819\") " pod="openstack/swift-ring-rebalance-jmhxf"
Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.904746 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f13b9a7b-6f5e-48fd-8d95-3beb851e9819-ring-data-devices\") pod \"swift-ring-rebalance-jmhxf\" (UID: \"f13b9a7b-6f5e-48fd-8d95-3beb851e9819\") " pod="openstack/swift-ring-rebalance-jmhxf"
Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.904781 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f13b9a7b-6f5e-48fd-8d95-3beb851e9819-combined-ca-bundle\") pod \"swift-ring-rebalance-jmhxf\" (UID: \"f13b9a7b-6f5e-48fd-8d95-3beb851e9819\") " pod="openstack/swift-ring-rebalance-jmhxf"
Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.904906 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f13b9a7b-6f5e-48fd-8d95-3beb851e9819-swiftconf\") pod \"swift-ring-rebalance-jmhxf\" (UID: \"f13b9a7b-6f5e-48fd-8d95-3beb851e9819\") " pod="openstack/swift-ring-rebalance-jmhxf"
Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.904943 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f13b9a7b-6f5e-48fd-8d95-3beb851e9819-dispersionconf\") pod \"swift-ring-rebalance-jmhxf\" (UID: \"f13b9a7b-6f5e-48fd-8d95-3beb851e9819\") " pod="openstack/swift-ring-rebalance-jmhxf"
Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.904970 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzwjx\" (UniqueName: \"kubernetes.io/projected/f13b9a7b-6f5e-48fd-8d95-3beb851e9819-kube-api-access-tzwjx\") pod \"swift-ring-rebalance-jmhxf\" (UID: \"f13b9a7b-6f5e-48fd-8d95-3beb851e9819\") " pod="openstack/swift-ring-rebalance-jmhxf"
Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.904996 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f13b9a7b-6f5e-48fd-8d95-3beb851e9819-scripts\") pod \"swift-ring-rebalance-jmhxf\" (UID: \"f13b9a7b-6f5e-48fd-8d95-3beb851e9819\") " pod="openstack/swift-ring-rebalance-jmhxf"
Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.008246 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f13b9a7b-6f5e-48fd-8d95-3beb851e9819-ring-data-devices\") pod \"swift-ring-rebalance-jmhxf\" (UID: \"f13b9a7b-6f5e-48fd-8d95-3beb851e9819\") " pod="openstack/swift-ring-rebalance-jmhxf"
Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.008390 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f13b9a7b-6f5e-48fd-8d95-3beb851e9819-combined-ca-bundle\") pod \"swift-ring-rebalance-jmhxf\" (UID: \"f13b9a7b-6f5e-48fd-8d95-3beb851e9819\") " pod="openstack/swift-ring-rebalance-jmhxf"
Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.008521 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f13b9a7b-6f5e-48fd-8d95-3beb851e9819-swiftconf\") pod \"swift-ring-rebalance-jmhxf\" (UID: \"f13b9a7b-6f5e-48fd-8d95-3beb851e9819\") " pod="openstack/swift-ring-rebalance-jmhxf"
Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.008616 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f13b9a7b-6f5e-48fd-8d95-3beb851e9819-dispersionconf\") pod \"swift-ring-rebalance-jmhxf\" (UID: \"f13b9a7b-6f5e-48fd-8d95-3beb851e9819\") " pod="openstack/swift-ring-rebalance-jmhxf"
Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.008661 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzwjx\" (UniqueName: \"kubernetes.io/projected/f13b9a7b-6f5e-48fd-8d95-3beb851e9819-kube-api-access-tzwjx\") pod \"swift-ring-rebalance-jmhxf\" (UID: \"f13b9a7b-6f5e-48fd-8d95-3beb851e9819\") " pod="openstack/swift-ring-rebalance-jmhxf"
Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.008711 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ce65dba3-22b9-482f-b3da-2f4705468ea4-etc-swift\") pod \"swift-storage-0\" (UID: \"ce65dba3-22b9-482f-b3da-2f4705468ea4\") " pod="openstack/swift-storage-0"
Jan 22 14:00:29
crc kubenswrapper[4769]: I0122 14:00:29.008753 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f13b9a7b-6f5e-48fd-8d95-3beb851e9819-scripts\") pod \"swift-ring-rebalance-jmhxf\" (UID: \"f13b9a7b-6f5e-48fd-8d95-3beb851e9819\") " pod="openstack/swift-ring-rebalance-jmhxf" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.009056 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f13b9a7b-6f5e-48fd-8d95-3beb851e9819-etc-swift\") pod \"swift-ring-rebalance-jmhxf\" (UID: \"f13b9a7b-6f5e-48fd-8d95-3beb851e9819\") " pod="openstack/swift-ring-rebalance-jmhxf" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.009541 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f13b9a7b-6f5e-48fd-8d95-3beb851e9819-scripts\") pod \"swift-ring-rebalance-jmhxf\" (UID: \"f13b9a7b-6f5e-48fd-8d95-3beb851e9819\") " pod="openstack/swift-ring-rebalance-jmhxf" Jan 22 14:00:29 crc kubenswrapper[4769]: E0122 14:00:29.009666 4769 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 22 14:00:29 crc kubenswrapper[4769]: E0122 14:00:29.009690 4769 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 22 14:00:29 crc kubenswrapper[4769]: E0122 14:00:29.009739 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ce65dba3-22b9-482f-b3da-2f4705468ea4-etc-swift podName:ce65dba3-22b9-482f-b3da-2f4705468ea4 nodeName:}" failed. No retries permitted until 2026-01-22 14:00:30.009714188 +0000 UTC m=+1009.420824117 (durationBeforeRetry 1s). 
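
Note the retry spacing across these errors: 500ms, then 1s here, and 2s and 4s further down. That doubling is the volume manager's per-operation exponential backoff. The sketch below reproduces the schedule with the Go standard library; the cap value is an assumption for illustration, not something stated in the log:

package main

import (
	"fmt"
	"time"
)

// nextBackoff doubles the delay after each failed attempt, up to a cap,
// mirroring the 500ms -> 1s -> 2s -> 4s spacing visible in the log.
func nextBackoff(d, max time.Duration) time.Duration {
	d *= 2
	if d > max {
		return max
	}
	return d
}

func main() {
	d := 500 * time.Millisecond
	for i := 0; i < 5; i++ {
		fmt.Printf("attempt %d: wait %v before retrying\n", i+1, d)
		d = nextBackoff(d, 2*time.Minute) // cap is illustrative
	}
}
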
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ce65dba3-22b9-482f-b3da-2f4705468ea4-etc-swift") pod "swift-storage-0" (UID: "ce65dba3-22b9-482f-b3da-2f4705468ea4") : configmap "swift-ring-files" not found Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.009051 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f13b9a7b-6f5e-48fd-8d95-3beb851e9819-ring-data-devices\") pod \"swift-ring-rebalance-jmhxf\" (UID: \"f13b9a7b-6f5e-48fd-8d95-3beb851e9819\") " pod="openstack/swift-ring-rebalance-jmhxf" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.010841 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f13b9a7b-6f5e-48fd-8d95-3beb851e9819-etc-swift\") pod \"swift-ring-rebalance-jmhxf\" (UID: \"f13b9a7b-6f5e-48fd-8d95-3beb851e9819\") " pod="openstack/swift-ring-rebalance-jmhxf" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.012702 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f13b9a7b-6f5e-48fd-8d95-3beb851e9819-swiftconf\") pod \"swift-ring-rebalance-jmhxf\" (UID: \"f13b9a7b-6f5e-48fd-8d95-3beb851e9819\") " pod="openstack/swift-ring-rebalance-jmhxf" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.020405 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f13b9a7b-6f5e-48fd-8d95-3beb851e9819-dispersionconf\") pod \"swift-ring-rebalance-jmhxf\" (UID: \"f13b9a7b-6f5e-48fd-8d95-3beb851e9819\") " pod="openstack/swift-ring-rebalance-jmhxf" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.024722 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f13b9a7b-6f5e-48fd-8d95-3beb851e9819-combined-ca-bundle\") pod \"swift-ring-rebalance-jmhxf\" (UID: \"f13b9a7b-6f5e-48fd-8d95-3beb851e9819\") " pod="openstack/swift-ring-rebalance-jmhxf" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.033096 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzwjx\" (UniqueName: \"kubernetes.io/projected/f13b9a7b-6f5e-48fd-8d95-3beb851e9819-kube-api-access-tzwjx\") pod \"swift-ring-rebalance-jmhxf\" (UID: \"f13b9a7b-6f5e-48fd-8d95-3beb851e9819\") " pod="openstack/swift-ring-rebalance-jmhxf" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.094605 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-5f5mt"] Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.124958 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-5sxsl"] Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.126818 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c89d5d749-5sxsl" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.130298 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-2ndkt"] Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.131248 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-2ndkt" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.134650 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.135498 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.191014 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-5sxsl"] Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.191280 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-2ndkt"] Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.197447 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-jmhxf" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.212075 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/cbba9b5e-2f1d-4a3a-930e-c835070aefe9-ovs-rundir\") pod \"ovn-controller-metrics-2ndkt\" (UID: \"cbba9b5e-2f1d-4a3a-930e-c835070aefe9\") " pod="openstack/ovn-controller-metrics-2ndkt" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.212180 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/cbba9b5e-2f1d-4a3a-930e-c835070aefe9-ovn-rundir\") pod \"ovn-controller-metrics-2ndkt\" (UID: \"cbba9b5e-2f1d-4a3a-930e-c835070aefe9\") " pod="openstack/ovn-controller-metrics-2ndkt" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.212239 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d778948b-7654-48d1-8be2-edd924d70ad5-config\") pod \"dnsmasq-dns-6c89d5d749-5sxsl\" (UID: \"d778948b-7654-48d1-8be2-edd924d70ad5\") " pod="openstack/dnsmasq-dns-6c89d5d749-5sxsl" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.212311 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cp4n6\" (UniqueName: \"kubernetes.io/projected/d778948b-7654-48d1-8be2-edd924d70ad5-kube-api-access-cp4n6\") pod \"dnsmasq-dns-6c89d5d749-5sxsl\" (UID: \"d778948b-7654-48d1-8be2-edd924d70ad5\") " pod="openstack/dnsmasq-dns-6c89d5d749-5sxsl" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.212356 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d778948b-7654-48d1-8be2-edd924d70ad5-ovsdbserver-sb\") pod \"dnsmasq-dns-6c89d5d749-5sxsl\" (UID: \"d778948b-7654-48d1-8be2-edd924d70ad5\") " pod="openstack/dnsmasq-dns-6c89d5d749-5sxsl" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.212393 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbba9b5e-2f1d-4a3a-930e-c835070aefe9-config\") pod \"ovn-controller-metrics-2ndkt\" (UID: \"cbba9b5e-2f1d-4a3a-930e-c835070aefe9\") " pod="openstack/ovn-controller-metrics-2ndkt" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.212425 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/d778948b-7654-48d1-8be2-edd924d70ad5-dns-svc\") pod \"dnsmasq-dns-6c89d5d749-5sxsl\" (UID: \"d778948b-7654-48d1-8be2-edd924d70ad5\") " pod="openstack/dnsmasq-dns-6c89d5d749-5sxsl" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.212592 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/cbba9b5e-2f1d-4a3a-930e-c835070aefe9-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-2ndkt\" (UID: \"cbba9b5e-2f1d-4a3a-930e-c835070aefe9\") " pod="openstack/ovn-controller-metrics-2ndkt" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.212634 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klhqk\" (UniqueName: \"kubernetes.io/projected/cbba9b5e-2f1d-4a3a-930e-c835070aefe9-kube-api-access-klhqk\") pod \"ovn-controller-metrics-2ndkt\" (UID: \"cbba9b5e-2f1d-4a3a-930e-c835070aefe9\") " pod="openstack/ovn-controller-metrics-2ndkt" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.212680 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbba9b5e-2f1d-4a3a-930e-c835070aefe9-combined-ca-bundle\") pod \"ovn-controller-metrics-2ndkt\" (UID: \"cbba9b5e-2f1d-4a3a-930e-c835070aefe9\") " pod="openstack/ovn-controller-metrics-2ndkt" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.314543 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d778948b-7654-48d1-8be2-edd924d70ad5-config\") pod \"dnsmasq-dns-6c89d5d749-5sxsl\" (UID: \"d778948b-7654-48d1-8be2-edd924d70ad5\") " pod="openstack/dnsmasq-dns-6c89d5d749-5sxsl" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.314584 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cp4n6\" (UniqueName: \"kubernetes.io/projected/d778948b-7654-48d1-8be2-edd924d70ad5-kube-api-access-cp4n6\") pod \"dnsmasq-dns-6c89d5d749-5sxsl\" (UID: \"d778948b-7654-48d1-8be2-edd924d70ad5\") " pod="openstack/dnsmasq-dns-6c89d5d749-5sxsl" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.314603 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d778948b-7654-48d1-8be2-edd924d70ad5-ovsdbserver-sb\") pod \"dnsmasq-dns-6c89d5d749-5sxsl\" (UID: \"d778948b-7654-48d1-8be2-edd924d70ad5\") " pod="openstack/dnsmasq-dns-6c89d5d749-5sxsl" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.314632 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbba9b5e-2f1d-4a3a-930e-c835070aefe9-config\") pod \"ovn-controller-metrics-2ndkt\" (UID: \"cbba9b5e-2f1d-4a3a-930e-c835070aefe9\") " pod="openstack/ovn-controller-metrics-2ndkt" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.314658 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d778948b-7654-48d1-8be2-edd924d70ad5-dns-svc\") pod \"dnsmasq-dns-6c89d5d749-5sxsl\" (UID: \"d778948b-7654-48d1-8be2-edd924d70ad5\") " pod="openstack/dnsmasq-dns-6c89d5d749-5sxsl" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.314697 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/cbba9b5e-2f1d-4a3a-930e-c835070aefe9-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-2ndkt\" (UID: \"cbba9b5e-2f1d-4a3a-930e-c835070aefe9\") " pod="openstack/ovn-controller-metrics-2ndkt" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.314716 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klhqk\" (UniqueName: \"kubernetes.io/projected/cbba9b5e-2f1d-4a3a-930e-c835070aefe9-kube-api-access-klhqk\") pod \"ovn-controller-metrics-2ndkt\" (UID: \"cbba9b5e-2f1d-4a3a-930e-c835070aefe9\") " pod="openstack/ovn-controller-metrics-2ndkt" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.314746 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbba9b5e-2f1d-4a3a-930e-c835070aefe9-combined-ca-bundle\") pod \"ovn-controller-metrics-2ndkt\" (UID: \"cbba9b5e-2f1d-4a3a-930e-c835070aefe9\") " pod="openstack/ovn-controller-metrics-2ndkt" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.314768 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/cbba9b5e-2f1d-4a3a-930e-c835070aefe9-ovs-rundir\") pod \"ovn-controller-metrics-2ndkt\" (UID: \"cbba9b5e-2f1d-4a3a-930e-c835070aefe9\") " pod="openstack/ovn-controller-metrics-2ndkt" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.314824 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/cbba9b5e-2f1d-4a3a-930e-c835070aefe9-ovn-rundir\") pod \"ovn-controller-metrics-2ndkt\" (UID: \"cbba9b5e-2f1d-4a3a-930e-c835070aefe9\") " pod="openstack/ovn-controller-metrics-2ndkt" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.315255 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/cbba9b5e-2f1d-4a3a-930e-c835070aefe9-ovn-rundir\") pod \"ovn-controller-metrics-2ndkt\" (UID: \"cbba9b5e-2f1d-4a3a-930e-c835070aefe9\") " pod="openstack/ovn-controller-metrics-2ndkt" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.315871 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbba9b5e-2f1d-4a3a-930e-c835070aefe9-config\") pod \"ovn-controller-metrics-2ndkt\" (UID: \"cbba9b5e-2f1d-4a3a-930e-c835070aefe9\") " pod="openstack/ovn-controller-metrics-2ndkt" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.316210 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d778948b-7654-48d1-8be2-edd924d70ad5-dns-svc\") pod \"dnsmasq-dns-6c89d5d749-5sxsl\" (UID: \"d778948b-7654-48d1-8be2-edd924d70ad5\") " pod="openstack/dnsmasq-dns-6c89d5d749-5sxsl" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.316264 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/cbba9b5e-2f1d-4a3a-930e-c835070aefe9-ovs-rundir\") pod \"ovn-controller-metrics-2ndkt\" (UID: \"cbba9b5e-2f1d-4a3a-930e-c835070aefe9\") " pod="openstack/ovn-controller-metrics-2ndkt" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.318313 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d778948b-7654-48d1-8be2-edd924d70ad5-ovsdbserver-sb\") pod \"dnsmasq-dns-6c89d5d749-5sxsl\" 
(UID: \"d778948b-7654-48d1-8be2-edd924d70ad5\") " pod="openstack/dnsmasq-dns-6c89d5d749-5sxsl" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.318535 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d778948b-7654-48d1-8be2-edd924d70ad5-config\") pod \"dnsmasq-dns-6c89d5d749-5sxsl\" (UID: \"d778948b-7654-48d1-8be2-edd924d70ad5\") " pod="openstack/dnsmasq-dns-6c89d5d749-5sxsl" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.321646 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbba9b5e-2f1d-4a3a-930e-c835070aefe9-combined-ca-bundle\") pod \"ovn-controller-metrics-2ndkt\" (UID: \"cbba9b5e-2f1d-4a3a-930e-c835070aefe9\") " pod="openstack/ovn-controller-metrics-2ndkt" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.325029 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/cbba9b5e-2f1d-4a3a-930e-c835070aefe9-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-2ndkt\" (UID: \"cbba9b5e-2f1d-4a3a-930e-c835070aefe9\") " pod="openstack/ovn-controller-metrics-2ndkt" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.334512 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klhqk\" (UniqueName: \"kubernetes.io/projected/cbba9b5e-2f1d-4a3a-930e-c835070aefe9-kube-api-access-klhqk\") pod \"ovn-controller-metrics-2ndkt\" (UID: \"cbba9b5e-2f1d-4a3a-930e-c835070aefe9\") " pod="openstack/ovn-controller-metrics-2ndkt" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.342424 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cp4n6\" (UniqueName: \"kubernetes.io/projected/d778948b-7654-48d1-8be2-edd924d70ad5-kube-api-access-cp4n6\") pod \"dnsmasq-dns-6c89d5d749-5sxsl\" (UID: \"d778948b-7654-48d1-8be2-edd924d70ad5\") " pod="openstack/dnsmasq-dns-6c89d5d749-5sxsl" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.387353 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-5sxsl"] Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.387888 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c89d5d749-5sxsl" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.421034 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-twczw"] Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.422405 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-twczw" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.449725 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.468188 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-twczw"] Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.536390 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-2ndkt" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.619292 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-jmhxf"] Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.627628 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/650dfc14-f283-4318-b6bc-4b17cdea15fa-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-twczw\" (UID: \"650dfc14-f283-4318-b6bc-4b17cdea15fa\") " pod="openstack/dnsmasq-dns-698758b865-twczw" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.627940 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/650dfc14-f283-4318-b6bc-4b17cdea15fa-dns-svc\") pod \"dnsmasq-dns-698758b865-twczw\" (UID: \"650dfc14-f283-4318-b6bc-4b17cdea15fa\") " pod="openstack/dnsmasq-dns-698758b865-twczw" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.628031 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssjkk\" (UniqueName: \"kubernetes.io/projected/650dfc14-f283-4318-b6bc-4b17cdea15fa-kube-api-access-ssjkk\") pod \"dnsmasq-dns-698758b865-twczw\" (UID: \"650dfc14-f283-4318-b6bc-4b17cdea15fa\") " pod="openstack/dnsmasq-dns-698758b865-twczw" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.628113 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/650dfc14-f283-4318-b6bc-4b17cdea15fa-config\") pod \"dnsmasq-dns-698758b865-twczw\" (UID: \"650dfc14-f283-4318-b6bc-4b17cdea15fa\") " pod="openstack/dnsmasq-dns-698758b865-twczw" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.628248 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/650dfc14-f283-4318-b6bc-4b17cdea15fa-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-twczw\" (UID: \"650dfc14-f283-4318-b6bc-4b17cdea15fa\") " pod="openstack/dnsmasq-dns-698758b865-twczw" Jan 22 14:00:29 crc kubenswrapper[4769]: W0122 14:00:29.656095 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf13b9a7b_6f5e_48fd_8d95_3beb851e9819.slice/crio-895da75304cec8858b8075e7a5265e609df985988010f8eef12f9027143cb2a0 WatchSource:0}: Error finding container 895da75304cec8858b8075e7a5265e609df985988010f8eef12f9027143cb2a0: Status 404 returned error can't find the container with id 895da75304cec8858b8075e7a5265e609df985988010f8eef12f9027143cb2a0 Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.729757 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssjkk\" (UniqueName: \"kubernetes.io/projected/650dfc14-f283-4318-b6bc-4b17cdea15fa-kube-api-access-ssjkk\") pod \"dnsmasq-dns-698758b865-twczw\" (UID: \"650dfc14-f283-4318-b6bc-4b17cdea15fa\") " pod="openstack/dnsmasq-dns-698758b865-twczw" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.729850 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/650dfc14-f283-4318-b6bc-4b17cdea15fa-config\") pod \"dnsmasq-dns-698758b865-twczw\" (UID: \"650dfc14-f283-4318-b6bc-4b17cdea15fa\") " 
pod="openstack/dnsmasq-dns-698758b865-twczw" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.729929 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/650dfc14-f283-4318-b6bc-4b17cdea15fa-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-twczw\" (UID: \"650dfc14-f283-4318-b6bc-4b17cdea15fa\") " pod="openstack/dnsmasq-dns-698758b865-twczw" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.729967 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/650dfc14-f283-4318-b6bc-4b17cdea15fa-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-twczw\" (UID: \"650dfc14-f283-4318-b6bc-4b17cdea15fa\") " pod="openstack/dnsmasq-dns-698758b865-twczw" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.730002 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/650dfc14-f283-4318-b6bc-4b17cdea15fa-dns-svc\") pod \"dnsmasq-dns-698758b865-twczw\" (UID: \"650dfc14-f283-4318-b6bc-4b17cdea15fa\") " pod="openstack/dnsmasq-dns-698758b865-twczw" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.730892 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/650dfc14-f283-4318-b6bc-4b17cdea15fa-dns-svc\") pod \"dnsmasq-dns-698758b865-twczw\" (UID: \"650dfc14-f283-4318-b6bc-4b17cdea15fa\") " pod="openstack/dnsmasq-dns-698758b865-twczw" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.731681 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/650dfc14-f283-4318-b6bc-4b17cdea15fa-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-twczw\" (UID: \"650dfc14-f283-4318-b6bc-4b17cdea15fa\") " pod="openstack/dnsmasq-dns-698758b865-twczw" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.731709 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/650dfc14-f283-4318-b6bc-4b17cdea15fa-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-twczw\" (UID: \"650dfc14-f283-4318-b6bc-4b17cdea15fa\") " pod="openstack/dnsmasq-dns-698758b865-twczw" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.732238 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/650dfc14-f283-4318-b6bc-4b17cdea15fa-config\") pod \"dnsmasq-dns-698758b865-twczw\" (UID: \"650dfc14-f283-4318-b6bc-4b17cdea15fa\") " pod="openstack/dnsmasq-dns-698758b865-twczw" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.758374 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssjkk\" (UniqueName: \"kubernetes.io/projected/650dfc14-f283-4318-b6bc-4b17cdea15fa-kube-api-access-ssjkk\") pod \"dnsmasq-dns-698758b865-twczw\" (UID: \"650dfc14-f283-4318-b6bc-4b17cdea15fa\") " pod="openstack/dnsmasq-dns-698758b865-twczw" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.782420 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-twczw" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.795076 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-5f5mt" event={"ID":"9ccf209c-9829-41bd-af53-26ea82e6c9e0","Type":"ContainerStarted","Data":"9b9e64b997b26d114d51b0ae4c6e0266bbcb40beb8208c3fa5614f05a348bcc2"} Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.795284 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7cb5889db5-5f5mt" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.796927 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-jmhxf" event={"ID":"f13b9a7b-6f5e-48fd-8d95-3beb851e9819","Type":"ContainerStarted","Data":"895da75304cec8858b8075e7a5265e609df985988010f8eef12f9027143cb2a0"} Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.815682 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7cb5889db5-5f5mt" podStartSLOduration=2.815663267 podStartE2EDuration="2.815663267s" podCreationTimestamp="2026-01-22 14:00:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:00:29.81271732 +0000 UTC m=+1009.223827249" watchObservedRunningTime="2026-01-22 14:00:29.815663267 +0000 UTC m=+1009.226773196" Jan 22 14:00:30 crc kubenswrapper[4769]: I0122 14:00:30.037251 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ce65dba3-22b9-482f-b3da-2f4705468ea4-etc-swift\") pod \"swift-storage-0\" (UID: \"ce65dba3-22b9-482f-b3da-2f4705468ea4\") " pod="openstack/swift-storage-0" Jan 22 14:00:30 crc kubenswrapper[4769]: E0122 14:00:30.037468 4769 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 22 14:00:30 crc kubenswrapper[4769]: E0122 14:00:30.037482 4769 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 22 14:00:30 crc kubenswrapper[4769]: E0122 14:00:30.037524 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ce65dba3-22b9-482f-b3da-2f4705468ea4-etc-swift podName:ce65dba3-22b9-482f-b3da-2f4705468ea4 nodeName:}" failed. No retries permitted until 2026-01-22 14:00:32.03750951 +0000 UTC m=+1011.448619439 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ce65dba3-22b9-482f-b3da-2f4705468ea4-etc-swift") pod "swift-storage-0" (UID: "ce65dba3-22b9-482f-b3da-2f4705468ea4") : configmap "swift-ring-files" not found Jan 22 14:00:30 crc kubenswrapper[4769]: I0122 14:00:30.075661 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-5sxsl"] Jan 22 14:00:30 crc kubenswrapper[4769]: W0122 14:00:30.078444 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd778948b_7654_48d1_8be2_edd924d70ad5.slice/crio-0a0baa79c6fc4875db3db7fd55282035d68e8f6ffe2eabbfb7794111253d1158 WatchSource:0}: Error finding container 0a0baa79c6fc4875db3db7fd55282035d68e8f6ffe2eabbfb7794111253d1158: Status 404 returned error can't find the container with id 0a0baa79c6fc4875db3db7fd55282035d68e8f6ffe2eabbfb7794111253d1158 Jan 22 14:00:30 crc kubenswrapper[4769]: I0122 14:00:30.138704 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-2ndkt"] Jan 22 14:00:30 crc kubenswrapper[4769]: W0122 14:00:30.144764 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcbba9b5e_2f1d_4a3a_930e_c835070aefe9.slice/crio-8a4c22b53fd5b3f290d01efbc26af67f088258c28155735c864ea40eea46f9fa WatchSource:0}: Error finding container 8a4c22b53fd5b3f290d01efbc26af67f088258c28155735c864ea40eea46f9fa: Status 404 returned error can't find the container with id 8a4c22b53fd5b3f290d01efbc26af67f088258c28155735c864ea40eea46f9fa Jan 22 14:00:30 crc kubenswrapper[4769]: I0122 14:00:30.351723 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-twczw"] Jan 22 14:00:30 crc kubenswrapper[4769]: W0122 14:00:30.371117 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod650dfc14_f283_4318_b6bc_4b17cdea15fa.slice/crio-a54623f453232dd2973918c8cc988921d99892583486b82a39525e719c837225 WatchSource:0}: Error finding container a54623f453232dd2973918c8cc988921d99892583486b82a39525e719c837225: Status 404 returned error can't find the container with id a54623f453232dd2973918c8cc988921d99892583486b82a39525e719c837225 Jan 22 14:00:30 crc kubenswrapper[4769]: I0122 14:00:30.714925 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 22 14:00:30 crc kubenswrapper[4769]: I0122 14:00:30.715230 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 22 14:00:30 crc kubenswrapper[4769]: I0122 14:00:30.765349 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 22 14:00:30 crc kubenswrapper[4769]: I0122 14:00:30.808248 4769 generic.go:334] "Generic (PLEG): container finished" podID="650dfc14-f283-4318-b6bc-4b17cdea15fa" containerID="1b5e22d53825ab8bee8892212745d8ad1728568928a82c44731ac44eedd528b5" exitCode=0 Jan 22 14:00:30 crc kubenswrapper[4769]: I0122 14:00:30.808325 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-twczw" event={"ID":"650dfc14-f283-4318-b6bc-4b17cdea15fa","Type":"ContainerDied","Data":"1b5e22d53825ab8bee8892212745d8ad1728568928a82c44731ac44eedd528b5"} Jan 22 14:00:30 crc kubenswrapper[4769]: I0122 14:00:30.808608 4769 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/dnsmasq-dns-698758b865-twczw" event={"ID":"650dfc14-f283-4318-b6bc-4b17cdea15fa","Type":"ContainerStarted","Data":"a54623f453232dd2973918c8cc988921d99892583486b82a39525e719c837225"} Jan 22 14:00:30 crc kubenswrapper[4769]: I0122 14:00:30.810811 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-2ndkt" event={"ID":"cbba9b5e-2f1d-4a3a-930e-c835070aefe9","Type":"ContainerStarted","Data":"ae1074e9c91d88a635053fb81b0de6149a7e3bd018551d04f068eba718a3841c"} Jan 22 14:00:30 crc kubenswrapper[4769]: I0122 14:00:30.810889 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-2ndkt" event={"ID":"cbba9b5e-2f1d-4a3a-930e-c835070aefe9","Type":"ContainerStarted","Data":"8a4c22b53fd5b3f290d01efbc26af67f088258c28155735c864ea40eea46f9fa"} Jan 22 14:00:30 crc kubenswrapper[4769]: I0122 14:00:30.813283 4769 generic.go:334] "Generic (PLEG): container finished" podID="d778948b-7654-48d1-8be2-edd924d70ad5" containerID="590989faecf49e258b30df1b08b67d281dbed21a6eda2dd9637b8f2c675de2da" exitCode=0 Jan 22 14:00:30 crc kubenswrapper[4769]: I0122 14:00:30.813411 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c89d5d749-5sxsl" event={"ID":"d778948b-7654-48d1-8be2-edd924d70ad5","Type":"ContainerDied","Data":"590989faecf49e258b30df1b08b67d281dbed21a6eda2dd9637b8f2c675de2da"} Jan 22 14:00:30 crc kubenswrapper[4769]: I0122 14:00:30.813438 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c89d5d749-5sxsl" event={"ID":"d778948b-7654-48d1-8be2-edd924d70ad5","Type":"ContainerStarted","Data":"0a0baa79c6fc4875db3db7fd55282035d68e8f6ffe2eabbfb7794111253d1158"} Jan 22 14:00:30 crc kubenswrapper[4769]: I0122 14:00:30.813516 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7cb5889db5-5f5mt" podUID="9ccf209c-9829-41bd-af53-26ea82e6c9e0" containerName="dnsmasq-dns" containerID="cri-o://9b9e64b997b26d114d51b0ae4c6e0266bbcb40beb8208c3fa5614f05a348bcc2" gracePeriod=10 Jan 22 14:00:30 crc kubenswrapper[4769]: I0122 14:00:30.900220 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-2ndkt" podStartSLOduration=1.900199897 podStartE2EDuration="1.900199897s" podCreationTimestamp="2026-01-22 14:00:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:00:30.884634568 +0000 UTC m=+1010.295744497" watchObservedRunningTime="2026-01-22 14:00:30.900199897 +0000 UTC m=+1010.311309826" Jan 22 14:00:30 crc kubenswrapper[4769]: I0122 14:00:30.940909 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.176759 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.180816 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.184284 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.184902 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-jg78z" Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.185117 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.185245 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.188414 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.194479 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c89d5d749-5sxsl" Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.261223 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/32d5b8f0-b7c1-4eeb-9b49-85b0240d28df-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"32d5b8f0-b7c1-4eeb-9b49-85b0240d28df\") " pod="openstack/ovn-northd-0" Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.261591 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/32d5b8f0-b7c1-4eeb-9b49-85b0240d28df-scripts\") pod \"ovn-northd-0\" (UID: \"32d5b8f0-b7c1-4eeb-9b49-85b0240d28df\") " pod="openstack/ovn-northd-0" Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.261718 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32d5b8f0-b7c1-4eeb-9b49-85b0240d28df-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"32d5b8f0-b7c1-4eeb-9b49-85b0240d28df\") " pod="openstack/ovn-northd-0" Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.261993 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/32d5b8f0-b7c1-4eeb-9b49-85b0240d28df-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"32d5b8f0-b7c1-4eeb-9b49-85b0240d28df\") " pod="openstack/ovn-northd-0" Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.262126 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32d5b8f0-b7c1-4eeb-9b49-85b0240d28df-config\") pod \"ovn-northd-0\" (UID: \"32d5b8f0-b7c1-4eeb-9b49-85b0240d28df\") " pod="openstack/ovn-northd-0" Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.262432 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sn9k\" (UniqueName: \"kubernetes.io/projected/32d5b8f0-b7c1-4eeb-9b49-85b0240d28df-kube-api-access-5sn9k\") pod \"ovn-northd-0\" (UID: \"32d5b8f0-b7c1-4eeb-9b49-85b0240d28df\") " pod="openstack/ovn-northd-0" Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.262543 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/32d5b8f0-b7c1-4eeb-9b49-85b0240d28df-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"32d5b8f0-b7c1-4eeb-9b49-85b0240d28df\") " pod="openstack/ovn-northd-0" Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.364096 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d778948b-7654-48d1-8be2-edd924d70ad5-dns-svc\") pod \"d778948b-7654-48d1-8be2-edd924d70ad5\" (UID: \"d778948b-7654-48d1-8be2-edd924d70ad5\") " Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.365478 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cp4n6\" (UniqueName: \"kubernetes.io/projected/d778948b-7654-48d1-8be2-edd924d70ad5-kube-api-access-cp4n6\") pod \"d778948b-7654-48d1-8be2-edd924d70ad5\" (UID: \"d778948b-7654-48d1-8be2-edd924d70ad5\") " Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.366180 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d778948b-7654-48d1-8be2-edd924d70ad5-config\") pod \"d778948b-7654-48d1-8be2-edd924d70ad5\" (UID: \"d778948b-7654-48d1-8be2-edd924d70ad5\") " Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.366366 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d778948b-7654-48d1-8be2-edd924d70ad5-ovsdbserver-sb\") pod \"d778948b-7654-48d1-8be2-edd924d70ad5\" (UID: \"d778948b-7654-48d1-8be2-edd924d70ad5\") " Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.366699 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/32d5b8f0-b7c1-4eeb-9b49-85b0240d28df-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"32d5b8f0-b7c1-4eeb-9b49-85b0240d28df\") " pod="openstack/ovn-northd-0" Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.366899 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/32d5b8f0-b7c1-4eeb-9b49-85b0240d28df-scripts\") pod \"ovn-northd-0\" (UID: \"32d5b8f0-b7c1-4eeb-9b49-85b0240d28df\") " pod="openstack/ovn-northd-0" Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.367049 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32d5b8f0-b7c1-4eeb-9b49-85b0240d28df-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"32d5b8f0-b7c1-4eeb-9b49-85b0240d28df\") " pod="openstack/ovn-northd-0" Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.367219 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/32d5b8f0-b7c1-4eeb-9b49-85b0240d28df-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"32d5b8f0-b7c1-4eeb-9b49-85b0240d28df\") " pod="openstack/ovn-northd-0" Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.367600 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/32d5b8f0-b7c1-4eeb-9b49-85b0240d28df-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"32d5b8f0-b7c1-4eeb-9b49-85b0240d28df\") " pod="openstack/ovn-northd-0" Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.367893 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/32d5b8f0-b7c1-4eeb-9b49-85b0240d28df-config\") pod \"ovn-northd-0\" (UID: \"32d5b8f0-b7c1-4eeb-9b49-85b0240d28df\") " pod="openstack/ovn-northd-0" Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.368034 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/32d5b8f0-b7c1-4eeb-9b49-85b0240d28df-scripts\") pod \"ovn-northd-0\" (UID: \"32d5b8f0-b7c1-4eeb-9b49-85b0240d28df\") " pod="openstack/ovn-northd-0" Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.368594 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32d5b8f0-b7c1-4eeb-9b49-85b0240d28df-config\") pod \"ovn-northd-0\" (UID: \"32d5b8f0-b7c1-4eeb-9b49-85b0240d28df\") " pod="openstack/ovn-northd-0" Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.369132 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5sn9k\" (UniqueName: \"kubernetes.io/projected/32d5b8f0-b7c1-4eeb-9b49-85b0240d28df-kube-api-access-5sn9k\") pod \"ovn-northd-0\" (UID: \"32d5b8f0-b7c1-4eeb-9b49-85b0240d28df\") " pod="openstack/ovn-northd-0" Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.369530 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/32d5b8f0-b7c1-4eeb-9b49-85b0240d28df-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"32d5b8f0-b7c1-4eeb-9b49-85b0240d28df\") " pod="openstack/ovn-northd-0" Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.370741 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/32d5b8f0-b7c1-4eeb-9b49-85b0240d28df-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"32d5b8f0-b7c1-4eeb-9b49-85b0240d28df\") " pod="openstack/ovn-northd-0" Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.373672 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32d5b8f0-b7c1-4eeb-9b49-85b0240d28df-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"32d5b8f0-b7c1-4eeb-9b49-85b0240d28df\") " pod="openstack/ovn-northd-0" Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.373695 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/32d5b8f0-b7c1-4eeb-9b49-85b0240d28df-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"32d5b8f0-b7c1-4eeb-9b49-85b0240d28df\") " pod="openstack/ovn-northd-0" Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.375028 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d778948b-7654-48d1-8be2-edd924d70ad5-kube-api-access-cp4n6" (OuterVolumeSpecName: "kube-api-access-cp4n6") pod "d778948b-7654-48d1-8be2-edd924d70ad5" (UID: "d778948b-7654-48d1-8be2-edd924d70ad5"). InnerVolumeSpecName "kube-api-access-cp4n6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.387626 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5sn9k\" (UniqueName: \"kubernetes.io/projected/32d5b8f0-b7c1-4eeb-9b49-85b0240d28df-kube-api-access-5sn9k\") pod \"ovn-northd-0\" (UID: \"32d5b8f0-b7c1-4eeb-9b49-85b0240d28df\") " pod="openstack/ovn-northd-0" Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.389842 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d778948b-7654-48d1-8be2-edd924d70ad5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d778948b-7654-48d1-8be2-edd924d70ad5" (UID: "d778948b-7654-48d1-8be2-edd924d70ad5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.390340 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d778948b-7654-48d1-8be2-edd924d70ad5-config" (OuterVolumeSpecName: "config") pod "d778948b-7654-48d1-8be2-edd924d70ad5" (UID: "d778948b-7654-48d1-8be2-edd924d70ad5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.406027 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d778948b-7654-48d1-8be2-edd924d70ad5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d778948b-7654-48d1-8be2-edd924d70ad5" (UID: "d778948b-7654-48d1-8be2-edd924d70ad5"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.473840 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d778948b-7654-48d1-8be2-edd924d70ad5-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.475224 4769 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d778948b-7654-48d1-8be2-edd924d70ad5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.475238 4769 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d778948b-7654-48d1-8be2-edd924d70ad5-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.475692 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cp4n6\" (UniqueName: \"kubernetes.io/projected/d778948b-7654-48d1-8be2-edd924d70ad5-kube-api-access-cp4n6\") on node \"crc\" DevicePath \"\"" Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.502928 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.821577 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-twczw" event={"ID":"650dfc14-f283-4318-b6bc-4b17cdea15fa","Type":"ContainerStarted","Data":"098ee03ef551965af984bff04a29c55f7d0f27976988405cdc2003fa044f9d9b"} Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.822980 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-twczw" Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.826654 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c89d5d749-5sxsl" event={"ID":"d778948b-7654-48d1-8be2-edd924d70ad5","Type":"ContainerDied","Data":"0a0baa79c6fc4875db3db7fd55282035d68e8f6ffe2eabbfb7794111253d1158"} Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.826705 4769 scope.go:117] "RemoveContainer" containerID="590989faecf49e258b30df1b08b67d281dbed21a6eda2dd9637b8f2c675de2da" Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.826867 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c89d5d749-5sxsl" Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.833925 4769 generic.go:334] "Generic (PLEG): container finished" podID="9ccf209c-9829-41bd-af53-26ea82e6c9e0" containerID="9b9e64b997b26d114d51b0ae4c6e0266bbcb40beb8208c3fa5614f05a348bcc2" exitCode=0 Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.834027 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-5f5mt" event={"ID":"9ccf209c-9829-41bd-af53-26ea82e6c9e0","Type":"ContainerDied","Data":"9b9e64b997b26d114d51b0ae4c6e0266bbcb40beb8208c3fa5614f05a348bcc2"} Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.834068 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-5f5mt" event={"ID":"9ccf209c-9829-41bd-af53-26ea82e6c9e0","Type":"ContainerDied","Data":"25c320cddf3aa10b554d2c87ef85148faa26e18a085d0ac5f86a88df32d73795"} Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.834083 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25c320cddf3aa10b554d2c87ef85148faa26e18a085d0ac5f86a88df32d73795" Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.840769 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-5f5mt" Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.846288 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-twczw" podStartSLOduration=2.8462519459999998 podStartE2EDuration="2.846251946s" podCreationTimestamp="2026-01-22 14:00:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:00:31.841072721 +0000 UTC m=+1011.252182650" watchObservedRunningTime="2026-01-22 14:00:31.846251946 +0000 UTC m=+1011.257361875" Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.905626 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-5sxsl"] Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.924867 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-5sxsl"] Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.991614 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ccf209c-9829-41bd-af53-26ea82e6c9e0-config\") pod \"9ccf209c-9829-41bd-af53-26ea82e6c9e0\" (UID: \"9ccf209c-9829-41bd-af53-26ea82e6c9e0\") " Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.991762 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6fpmd\" (UniqueName: \"kubernetes.io/projected/9ccf209c-9829-41bd-af53-26ea82e6c9e0-kube-api-access-6fpmd\") pod \"9ccf209c-9829-41bd-af53-26ea82e6c9e0\" (UID: \"9ccf209c-9829-41bd-af53-26ea82e6c9e0\") " Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.991816 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ccf209c-9829-41bd-af53-26ea82e6c9e0-dns-svc\") pod \"9ccf209c-9829-41bd-af53-26ea82e6c9e0\" (UID: \"9ccf209c-9829-41bd-af53-26ea82e6c9e0\") " Jan 22 14:00:32 crc kubenswrapper[4769]: I0122 14:00:32.002930 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ccf209c-9829-41bd-af53-26ea82e6c9e0-kube-api-access-6fpmd" (OuterVolumeSpecName: "kube-api-access-6fpmd") pod "9ccf209c-9829-41bd-af53-26ea82e6c9e0" (UID: "9ccf209c-9829-41bd-af53-26ea82e6c9e0"). InnerVolumeSpecName "kube-api-access-6fpmd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:00:32 crc kubenswrapper[4769]: I0122 14:00:32.047393 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ccf209c-9829-41bd-af53-26ea82e6c9e0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9ccf209c-9829-41bd-af53-26ea82e6c9e0" (UID: "9ccf209c-9829-41bd-af53-26ea82e6c9e0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:00:32 crc kubenswrapper[4769]: I0122 14:00:32.048297 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ccf209c-9829-41bd-af53-26ea82e6c9e0-config" (OuterVolumeSpecName: "config") pod "9ccf209c-9829-41bd-af53-26ea82e6c9e0" (UID: "9ccf209c-9829-41bd-af53-26ea82e6c9e0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:00:32 crc kubenswrapper[4769]: I0122 14:00:32.094472 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ce65dba3-22b9-482f-b3da-2f4705468ea4-etc-swift\") pod \"swift-storage-0\" (UID: \"ce65dba3-22b9-482f-b3da-2f4705468ea4\") " pod="openstack/swift-storage-0" Jan 22 14:00:32 crc kubenswrapper[4769]: I0122 14:00:32.094631 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6fpmd\" (UniqueName: \"kubernetes.io/projected/9ccf209c-9829-41bd-af53-26ea82e6c9e0-kube-api-access-6fpmd\") on node \"crc\" DevicePath \"\"" Jan 22 14:00:32 crc kubenswrapper[4769]: I0122 14:00:32.094649 4769 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ccf209c-9829-41bd-af53-26ea82e6c9e0-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 14:00:32 crc kubenswrapper[4769]: I0122 14:00:32.094662 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ccf209c-9829-41bd-af53-26ea82e6c9e0-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:00:32 crc kubenswrapper[4769]: E0122 14:00:32.094775 4769 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 22 14:00:32 crc kubenswrapper[4769]: E0122 14:00:32.094811 4769 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 22 14:00:32 crc kubenswrapper[4769]: E0122 14:00:32.094862 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ce65dba3-22b9-482f-b3da-2f4705468ea4-etc-swift podName:ce65dba3-22b9-482f-b3da-2f4705468ea4 nodeName:}" failed. No retries permitted until 2026-01-22 14:00:36.094843071 +0000 UTC m=+1015.505953000 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ce65dba3-22b9-482f-b3da-2f4705468ea4-etc-swift") pod "swift-storage-0" (UID: "ce65dba3-22b9-482f-b3da-2f4705468ea4") : configmap "swift-ring-files" not found Jan 22 14:00:32 crc kubenswrapper[4769]: I0122 14:00:32.111263 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 22 14:00:32 crc kubenswrapper[4769]: I0122 14:00:32.840500 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-5f5mt" Jan 22 14:00:32 crc kubenswrapper[4769]: I0122 14:00:32.869236 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-5f5mt"] Jan 22 14:00:32 crc kubenswrapper[4769]: I0122 14:00:32.874282 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-5f5mt"] Jan 22 14:00:32 crc kubenswrapper[4769]: I0122 14:00:32.891257 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ccf209c-9829-41bd-af53-26ea82e6c9e0" path="/var/lib/kubelet/pods/9ccf209c-9829-41bd-af53-26ea82e6c9e0/volumes" Jan 22 14:00:32 crc kubenswrapper[4769]: I0122 14:00:32.892001 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d778948b-7654-48d1-8be2-edd924d70ad5" path="/var/lib/kubelet/pods/d778948b-7654-48d1-8be2-edd924d70ad5/volumes" Jan 22 14:00:33 crc kubenswrapper[4769]: I0122 14:00:33.171181 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 22 14:00:33 crc kubenswrapper[4769]: I0122 14:00:33.172015 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 22 14:00:33 crc kubenswrapper[4769]: I0122 14:00:33.300336 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 22 14:00:33 crc kubenswrapper[4769]: W0122 14:00:33.904028 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod32d5b8f0_b7c1_4eeb_9b49_85b0240d28df.slice/crio-318d9796eafe0b5c1a57f86c20f2fd8829205dddd1fe1281e8300053bfa894aa WatchSource:0}: Error finding container 318d9796eafe0b5c1a57f86c20f2fd8829205dddd1fe1281e8300053bfa894aa: Status 404 returned error can't find the container with id 318d9796eafe0b5c1a57f86c20f2fd8829205dddd1fe1281e8300053bfa894aa Jan 22 14:00:33 crc kubenswrapper[4769]: I0122 14:00:33.943552 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.454514 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-0c5f-account-create-update-dbzd4"] Jan 22 14:00:34 crc kubenswrapper[4769]: E0122 14:00:34.455461 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d778948b-7654-48d1-8be2-edd924d70ad5" containerName="init" Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.455602 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="d778948b-7654-48d1-8be2-edd924d70ad5" containerName="init" Jan 22 14:00:34 crc kubenswrapper[4769]: E0122 14:00:34.455716 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ccf209c-9829-41bd-af53-26ea82e6c9e0" containerName="init" Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.455817 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ccf209c-9829-41bd-af53-26ea82e6c9e0" containerName="init" Jan 22 14:00:34 crc kubenswrapper[4769]: E0122 14:00:34.455985 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ccf209c-9829-41bd-af53-26ea82e6c9e0" containerName="dnsmasq-dns" Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.456084 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ccf209c-9829-41bd-af53-26ea82e6c9e0" containerName="dnsmasq-dns" Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.456354 4769 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="d778948b-7654-48d1-8be2-edd924d70ad5" containerName="init" Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.456756 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ccf209c-9829-41bd-af53-26ea82e6c9e0" containerName="dnsmasq-dns" Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.457678 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-0c5f-account-create-update-dbzd4" Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.460066 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.468844 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-0c5f-account-create-update-dbzd4"] Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.520833 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-mw8m7"] Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.522195 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-mw8m7" Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.529329 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-mw8m7"] Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.530783 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.530877 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.561772 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjlc2\" (UniqueName: \"kubernetes.io/projected/bced8c79-d4b4-42dc-ba19-a4ba1eeb4387-kube-api-access-bjlc2\") pod \"keystone-0c5f-account-create-update-dbzd4\" (UID: \"bced8c79-d4b4-42dc-ba19-a4ba1eeb4387\") " pod="openstack/keystone-0c5f-account-create-update-dbzd4" Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.561861 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bced8c79-d4b4-42dc-ba19-a4ba1eeb4387-operator-scripts\") pod \"keystone-0c5f-account-create-update-dbzd4\" (UID: \"bced8c79-d4b4-42dc-ba19-a4ba1eeb4387\") " pod="openstack/keystone-0c5f-account-create-update-dbzd4" Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.600434 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.662716 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjlc2\" (UniqueName: \"kubernetes.io/projected/bced8c79-d4b4-42dc-ba19-a4ba1eeb4387-kube-api-access-bjlc2\") pod \"keystone-0c5f-account-create-update-dbzd4\" (UID: \"bced8c79-d4b4-42dc-ba19-a4ba1eeb4387\") " pod="openstack/keystone-0c5f-account-create-update-dbzd4" Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.662874 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bced8c79-d4b4-42dc-ba19-a4ba1eeb4387-operator-scripts\") pod \"keystone-0c5f-account-create-update-dbzd4\" (UID: \"bced8c79-d4b4-42dc-ba19-a4ba1eeb4387\") " 
pod="openstack/keystone-0c5f-account-create-update-dbzd4" Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.662934 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lf8bm\" (UniqueName: \"kubernetes.io/projected/8e5e1134-cb08-4676-b40b-5e05af038ec7-kube-api-access-lf8bm\") pod \"keystone-db-create-mw8m7\" (UID: \"8e5e1134-cb08-4676-b40b-5e05af038ec7\") " pod="openstack/keystone-db-create-mw8m7" Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.662952 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e5e1134-cb08-4676-b40b-5e05af038ec7-operator-scripts\") pod \"keystone-db-create-mw8m7\" (UID: \"8e5e1134-cb08-4676-b40b-5e05af038ec7\") " pod="openstack/keystone-db-create-mw8m7" Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.663941 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bced8c79-d4b4-42dc-ba19-a4ba1eeb4387-operator-scripts\") pod \"keystone-0c5f-account-create-update-dbzd4\" (UID: \"bced8c79-d4b4-42dc-ba19-a4ba1eeb4387\") " pod="openstack/keystone-0c5f-account-create-update-dbzd4" Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.683526 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjlc2\" (UniqueName: \"kubernetes.io/projected/bced8c79-d4b4-42dc-ba19-a4ba1eeb4387-kube-api-access-bjlc2\") pod \"keystone-0c5f-account-create-update-dbzd4\" (UID: \"bced8c79-d4b4-42dc-ba19-a4ba1eeb4387\") " pod="openstack/keystone-0c5f-account-create-update-dbzd4" Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.764209 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lf8bm\" (UniqueName: \"kubernetes.io/projected/8e5e1134-cb08-4676-b40b-5e05af038ec7-kube-api-access-lf8bm\") pod \"keystone-db-create-mw8m7\" (UID: \"8e5e1134-cb08-4676-b40b-5e05af038ec7\") " pod="openstack/keystone-db-create-mw8m7" Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.764267 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e5e1134-cb08-4676-b40b-5e05af038ec7-operator-scripts\") pod \"keystone-db-create-mw8m7\" (UID: \"8e5e1134-cb08-4676-b40b-5e05af038ec7\") " pod="openstack/keystone-db-create-mw8m7" Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.767341 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e5e1134-cb08-4676-b40b-5e05af038ec7-operator-scripts\") pod \"keystone-db-create-mw8m7\" (UID: \"8e5e1134-cb08-4676-b40b-5e05af038ec7\") " pod="openstack/keystone-db-create-mw8m7" Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.781250 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lf8bm\" (UniqueName: \"kubernetes.io/projected/8e5e1134-cb08-4676-b40b-5e05af038ec7-kube-api-access-lf8bm\") pod \"keystone-db-create-mw8m7\" (UID: \"8e5e1134-cb08-4676-b40b-5e05af038ec7\") " pod="openstack/keystone-db-create-mw8m7" Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.806616 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-0c5f-account-create-update-dbzd4" Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.845236 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-7q976"] Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.846464 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-mw8m7" Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.850672 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-7q976" Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.856573 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-7q976"] Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.861398 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"32d5b8f0-b7c1-4eeb-9b49-85b0240d28df","Type":"ContainerStarted","Data":"318d9796eafe0b5c1a57f86c20f2fd8829205dddd1fe1281e8300053bfa894aa"} Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.869204 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-a329-account-create-update-5dtjs"] Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.870471 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-a329-account-create-update-5dtjs" Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.874391 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-a329-account-create-update-5dtjs"] Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.875457 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.875658 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gl85k\" (UniqueName: \"kubernetes.io/projected/46ca4e3b-a376-4f54-88c0-75d4a912d489-kube-api-access-gl85k\") pod \"placement-a329-account-create-update-5dtjs\" (UID: \"46ca4e3b-a376-4f54-88c0-75d4a912d489\") " pod="openstack/placement-a329-account-create-update-5dtjs" Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.875824 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46ca4e3b-a376-4f54-88c0-75d4a912d489-operator-scripts\") pod \"placement-a329-account-create-update-5dtjs\" (UID: \"46ca4e3b-a376-4f54-88c0-75d4a912d489\") " pod="openstack/placement-a329-account-create-update-5dtjs" Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.875899 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/257149e5-e0f3-4721-9329-6c119ce91192-operator-scripts\") pod \"placement-db-create-7q976\" (UID: \"257149e5-e0f3-4721-9329-6c119ce91192\") " pod="openstack/placement-db-create-7q976" Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.937645 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.977761 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gl85k\" (UniqueName: \"kubernetes.io/projected/46ca4e3b-a376-4f54-88c0-75d4a912d489-kube-api-access-gl85k\") pod 
\"placement-a329-account-create-update-5dtjs\" (UID: \"46ca4e3b-a376-4f54-88c0-75d4a912d489\") " pod="openstack/placement-a329-account-create-update-5dtjs" Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.977903 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwkh9\" (UniqueName: \"kubernetes.io/projected/257149e5-e0f3-4721-9329-6c119ce91192-kube-api-access-dwkh9\") pod \"placement-db-create-7q976\" (UID: \"257149e5-e0f3-4721-9329-6c119ce91192\") " pod="openstack/placement-db-create-7q976" Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.978007 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46ca4e3b-a376-4f54-88c0-75d4a912d489-operator-scripts\") pod \"placement-a329-account-create-update-5dtjs\" (UID: \"46ca4e3b-a376-4f54-88c0-75d4a912d489\") " pod="openstack/placement-a329-account-create-update-5dtjs" Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.978145 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/257149e5-e0f3-4721-9329-6c119ce91192-operator-scripts\") pod \"placement-db-create-7q976\" (UID: \"257149e5-e0f3-4721-9329-6c119ce91192\") " pod="openstack/placement-db-create-7q976" Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.978966 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/257149e5-e0f3-4721-9329-6c119ce91192-operator-scripts\") pod \"placement-db-create-7q976\" (UID: \"257149e5-e0f3-4721-9329-6c119ce91192\") " pod="openstack/placement-db-create-7q976" Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.981981 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46ca4e3b-a376-4f54-88c0-75d4a912d489-operator-scripts\") pod \"placement-a329-account-create-update-5dtjs\" (UID: \"46ca4e3b-a376-4f54-88c0-75d4a912d489\") " pod="openstack/placement-a329-account-create-update-5dtjs" Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.997253 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gl85k\" (UniqueName: \"kubernetes.io/projected/46ca4e3b-a376-4f54-88c0-75d4a912d489-kube-api-access-gl85k\") pod \"placement-a329-account-create-update-5dtjs\" (UID: \"46ca4e3b-a376-4f54-88c0-75d4a912d489\") " pod="openstack/placement-a329-account-create-update-5dtjs" Jan 22 14:00:35 crc kubenswrapper[4769]: I0122 14:00:35.080017 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwkh9\" (UniqueName: \"kubernetes.io/projected/257149e5-e0f3-4721-9329-6c119ce91192-kube-api-access-dwkh9\") pod \"placement-db-create-7q976\" (UID: \"257149e5-e0f3-4721-9329-6c119ce91192\") " pod="openstack/placement-db-create-7q976" Jan 22 14:00:35 crc kubenswrapper[4769]: I0122 14:00:35.096091 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwkh9\" (UniqueName: \"kubernetes.io/projected/257149e5-e0f3-4721-9329-6c119ce91192-kube-api-access-dwkh9\") pod \"placement-db-create-7q976\" (UID: \"257149e5-e0f3-4721-9329-6c119ce91192\") " pod="openstack/placement-db-create-7q976" Jan 22 14:00:35 crc kubenswrapper[4769]: I0122 14:00:35.178398 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-7q976" Jan 22 14:00:35 crc kubenswrapper[4769]: I0122 14:00:35.196255 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-a329-account-create-update-5dtjs" Jan 22 14:00:35 crc kubenswrapper[4769]: I0122 14:00:35.805672 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-mw8m7"] Jan 22 14:00:35 crc kubenswrapper[4769]: W0122 14:00:35.817870 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e5e1134_cb08_4676_b40b_5e05af038ec7.slice/crio-aeb0990d033e2bd5d75575962246340f82522b4363e6604461826d0c90f386cb WatchSource:0}: Error finding container aeb0990d033e2bd5d75575962246340f82522b4363e6604461826d0c90f386cb: Status 404 returned error can't find the container with id aeb0990d033e2bd5d75575962246340f82522b4363e6604461826d0c90f386cb Jan 22 14:00:35 crc kubenswrapper[4769]: I0122 14:00:35.872809 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-mw8m7" event={"ID":"8e5e1134-cb08-4676-b40b-5e05af038ec7","Type":"ContainerStarted","Data":"aeb0990d033e2bd5d75575962246340f82522b4363e6604461826d0c90f386cb"} Jan 22 14:00:35 crc kubenswrapper[4769]: I0122 14:00:35.878920 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-jmhxf" event={"ID":"f13b9a7b-6f5e-48fd-8d95-3beb851e9819","Type":"ContainerStarted","Data":"b3f6458924f57ce2e0a8e81626e83771a68f1ce1972979549e1eea8a213c5566"} Jan 22 14:00:35 crc kubenswrapper[4769]: I0122 14:00:35.906180 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-jmhxf" podStartSLOduration=2.095535902 podStartE2EDuration="7.906156633s" podCreationTimestamp="2026-01-22 14:00:28 +0000 UTC" firstStartedPulling="2026-01-22 14:00:29.658120649 +0000 UTC m=+1009.069230578" lastFinishedPulling="2026-01-22 14:00:35.46874138 +0000 UTC m=+1014.879851309" observedRunningTime="2026-01-22 14:00:35.899849447 +0000 UTC m=+1015.310959386" watchObservedRunningTime="2026-01-22 14:00:35.906156633 +0000 UTC m=+1015.317266562" Jan 22 14:00:35 crc kubenswrapper[4769]: I0122 14:00:35.922777 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-7q976"] Jan 22 14:00:35 crc kubenswrapper[4769]: W0122 14:00:35.928027 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod257149e5_e0f3_4721_9329_6c119ce91192.slice/crio-dbd1cb91be4ead0d1232743d3eb938c2081f310049bb6a53aa884f832a09a868 WatchSource:0}: Error finding container dbd1cb91be4ead0d1232743d3eb938c2081f310049bb6a53aa884f832a09a868: Status 404 returned error can't find the container with id dbd1cb91be4ead0d1232743d3eb938c2081f310049bb6a53aa884f832a09a868 Jan 22 14:00:35 crc kubenswrapper[4769]: I0122 14:00:35.988208 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-a329-account-create-update-5dtjs"] Jan 22 14:00:36 crc kubenswrapper[4769]: I0122 14:00:36.029045 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-0c5f-account-create-update-dbzd4"] Jan 22 14:00:36 crc kubenswrapper[4769]: I0122 14:00:36.101830 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ce65dba3-22b9-482f-b3da-2f4705468ea4-etc-swift\") pod \"swift-storage-0\" (UID: 
\"ce65dba3-22b9-482f-b3da-2f4705468ea4\") " pod="openstack/swift-storage-0" Jan 22 14:00:36 crc kubenswrapper[4769]: E0122 14:00:36.102167 4769 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 22 14:00:36 crc kubenswrapper[4769]: E0122 14:00:36.103130 4769 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 22 14:00:36 crc kubenswrapper[4769]: E0122 14:00:36.103205 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ce65dba3-22b9-482f-b3da-2f4705468ea4-etc-swift podName:ce65dba3-22b9-482f-b3da-2f4705468ea4 nodeName:}" failed. No retries permitted until 2026-01-22 14:00:44.103179226 +0000 UTC m=+1023.514289155 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ce65dba3-22b9-482f-b3da-2f4705468ea4-etc-swift") pod "swift-storage-0" (UID: "ce65dba3-22b9-482f-b3da-2f4705468ea4") : configmap "swift-ring-files" not found Jan 22 14:00:36 crc kubenswrapper[4769]: I0122 14:00:36.884003 4769 generic.go:334] "Generic (PLEG): container finished" podID="257149e5-e0f3-4721-9329-6c119ce91192" containerID="c074e42ca3ff188c7761b8f55de35192aed9fef36fdef20a8193ec2013468312" exitCode=0 Jan 22 14:00:36 crc kubenswrapper[4769]: I0122 14:00:36.885760 4769 generic.go:334] "Generic (PLEG): container finished" podID="bced8c79-d4b4-42dc-ba19-a4ba1eeb4387" containerID="41ccd1233986e7a4c125219fe7adea8a9635992e6e64e942e038414ae80cde80" exitCode=0 Jan 22 14:00:36 crc kubenswrapper[4769]: I0122 14:00:36.892217 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-7q976" event={"ID":"257149e5-e0f3-4721-9329-6c119ce91192","Type":"ContainerDied","Data":"c074e42ca3ff188c7761b8f55de35192aed9fef36fdef20a8193ec2013468312"} Jan 22 14:00:36 crc kubenswrapper[4769]: I0122 14:00:36.892276 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-7q976" event={"ID":"257149e5-e0f3-4721-9329-6c119ce91192","Type":"ContainerStarted","Data":"dbd1cb91be4ead0d1232743d3eb938c2081f310049bb6a53aa884f832a09a868"} Jan 22 14:00:36 crc kubenswrapper[4769]: I0122 14:00:36.892293 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-0c5f-account-create-update-dbzd4" event={"ID":"bced8c79-d4b4-42dc-ba19-a4ba1eeb4387","Type":"ContainerDied","Data":"41ccd1233986e7a4c125219fe7adea8a9635992e6e64e942e038414ae80cde80"} Jan 22 14:00:36 crc kubenswrapper[4769]: I0122 14:00:36.892353 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-0c5f-account-create-update-dbzd4" event={"ID":"bced8c79-d4b4-42dc-ba19-a4ba1eeb4387","Type":"ContainerStarted","Data":"0806411dbac78855277ccd8aae65453370b85fb1ff508ae26217b4b63474dfa8"} Jan 22 14:00:36 crc kubenswrapper[4769]: I0122 14:00:36.892427 4769 generic.go:334] "Generic (PLEG): container finished" podID="8e5e1134-cb08-4676-b40b-5e05af038ec7" containerID="97b2836a40fe3718dc9876ac751e671d98460d0371e12f643bc7ac498b12c4d8" exitCode=0 Jan 22 14:00:36 crc kubenswrapper[4769]: I0122 14:00:36.892510 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-mw8m7" event={"ID":"8e5e1134-cb08-4676-b40b-5e05af038ec7","Type":"ContainerDied","Data":"97b2836a40fe3718dc9876ac751e671d98460d0371e12f643bc7ac498b12c4d8"} Jan 22 14:00:36 crc kubenswrapper[4769]: I0122 14:00:36.900570 4769 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/ovn-northd-0" event={"ID":"32d5b8f0-b7c1-4eeb-9b49-85b0240d28df","Type":"ContainerStarted","Data":"eca21d7f6c008a3ab3bd6cd8c6674138b1a4d1736d28bbd57680bff23218d7c6"} Jan 22 14:00:36 crc kubenswrapper[4769]: I0122 14:00:36.903942 4769 generic.go:334] "Generic (PLEG): container finished" podID="46ca4e3b-a376-4f54-88c0-75d4a912d489" containerID="76ee9e3f92bd4b52916160b7315f6f1bcae498478a919fab65490233e1c3a657" exitCode=0 Jan 22 14:00:36 crc kubenswrapper[4769]: I0122 14:00:36.904009 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-a329-account-create-update-5dtjs" event={"ID":"46ca4e3b-a376-4f54-88c0-75d4a912d489","Type":"ContainerDied","Data":"76ee9e3f92bd4b52916160b7315f6f1bcae498478a919fab65490233e1c3a657"} Jan 22 14:00:36 crc kubenswrapper[4769]: I0122 14:00:36.904031 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-a329-account-create-update-5dtjs" event={"ID":"46ca4e3b-a376-4f54-88c0-75d4a912d489","Type":"ContainerStarted","Data":"5599ff455012fd2651b3f2b0c6e96e5330d4661239d31dd6a13c19c8874810a4"} Jan 22 14:00:37 crc kubenswrapper[4769]: I0122 14:00:37.917769 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"32d5b8f0-b7c1-4eeb-9b49-85b0240d28df","Type":"ContainerStarted","Data":"0568f0a3041dc122b247608db4fda9697a3ce9446474bc9931c7396300943a5b"} Jan 22 14:00:37 crc kubenswrapper[4769]: I0122 14:00:37.918070 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 22 14:00:37 crc kubenswrapper[4769]: I0122 14:00:37.947407 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=4.239277149 podStartE2EDuration="6.947374781s" podCreationTimestamp="2026-01-22 14:00:31 +0000 UTC" firstStartedPulling="2026-01-22 14:00:33.906774201 +0000 UTC m=+1013.317884130" lastFinishedPulling="2026-01-22 14:00:36.614871833 +0000 UTC m=+1016.025981762" observedRunningTime="2026-01-22 14:00:37.935513539 +0000 UTC m=+1017.346623478" watchObservedRunningTime="2026-01-22 14:00:37.947374781 +0000 UTC m=+1017.358484710" Jan 22 14:00:38 crc kubenswrapper[4769]: I0122 14:00:38.354934 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-7q976" Jan 22 14:00:38 crc kubenswrapper[4769]: I0122 14:00:38.443114 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwkh9\" (UniqueName: \"kubernetes.io/projected/257149e5-e0f3-4721-9329-6c119ce91192-kube-api-access-dwkh9\") pod \"257149e5-e0f3-4721-9329-6c119ce91192\" (UID: \"257149e5-e0f3-4721-9329-6c119ce91192\") " Jan 22 14:00:38 crc kubenswrapper[4769]: I0122 14:00:38.443171 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/257149e5-e0f3-4721-9329-6c119ce91192-operator-scripts\") pod \"257149e5-e0f3-4721-9329-6c119ce91192\" (UID: \"257149e5-e0f3-4721-9329-6c119ce91192\") " Jan 22 14:00:38 crc kubenswrapper[4769]: I0122 14:00:38.443697 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/257149e5-e0f3-4721-9329-6c119ce91192-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "257149e5-e0f3-4721-9329-6c119ce91192" (UID: "257149e5-e0f3-4721-9329-6c119ce91192"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:00:38 crc kubenswrapper[4769]: I0122 14:00:38.449771 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/257149e5-e0f3-4721-9329-6c119ce91192-kube-api-access-dwkh9" (OuterVolumeSpecName: "kube-api-access-dwkh9") pod "257149e5-e0f3-4721-9329-6c119ce91192" (UID: "257149e5-e0f3-4721-9329-6c119ce91192"). InnerVolumeSpecName "kube-api-access-dwkh9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:00:38 crc kubenswrapper[4769]: I0122 14:00:38.519261 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-mw8m7" Jan 22 14:00:38 crc kubenswrapper[4769]: I0122 14:00:38.529231 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-a329-account-create-update-5dtjs" Jan 22 14:00:38 crc kubenswrapper[4769]: I0122 14:00:38.544724 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwkh9\" (UniqueName: \"kubernetes.io/projected/257149e5-e0f3-4721-9329-6c119ce91192-kube-api-access-dwkh9\") on node \"crc\" DevicePath \"\"" Jan 22 14:00:38 crc kubenswrapper[4769]: I0122 14:00:38.544760 4769 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/257149e5-e0f3-4721-9329-6c119ce91192-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 14:00:38 crc kubenswrapper[4769]: I0122 14:00:38.545419 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-0c5f-account-create-update-dbzd4" Jan 22 14:00:38 crc kubenswrapper[4769]: I0122 14:00:38.646086 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gl85k\" (UniqueName: \"kubernetes.io/projected/46ca4e3b-a376-4f54-88c0-75d4a912d489-kube-api-access-gl85k\") pod \"46ca4e3b-a376-4f54-88c0-75d4a912d489\" (UID: \"46ca4e3b-a376-4f54-88c0-75d4a912d489\") " Jan 22 14:00:38 crc kubenswrapper[4769]: I0122 14:00:38.646159 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lf8bm\" (UniqueName: \"kubernetes.io/projected/8e5e1134-cb08-4676-b40b-5e05af038ec7-kube-api-access-lf8bm\") pod \"8e5e1134-cb08-4676-b40b-5e05af038ec7\" (UID: \"8e5e1134-cb08-4676-b40b-5e05af038ec7\") " Jan 22 14:00:38 crc kubenswrapper[4769]: I0122 14:00:38.646340 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjlc2\" (UniqueName: \"kubernetes.io/projected/bced8c79-d4b4-42dc-ba19-a4ba1eeb4387-kube-api-access-bjlc2\") pod \"bced8c79-d4b4-42dc-ba19-a4ba1eeb4387\" (UID: \"bced8c79-d4b4-42dc-ba19-a4ba1eeb4387\") " Jan 22 14:00:38 crc kubenswrapper[4769]: I0122 14:00:38.646362 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46ca4e3b-a376-4f54-88c0-75d4a912d489-operator-scripts\") pod \"46ca4e3b-a376-4f54-88c0-75d4a912d489\" (UID: \"46ca4e3b-a376-4f54-88c0-75d4a912d489\") " Jan 22 14:00:38 crc kubenswrapper[4769]: I0122 14:00:38.646430 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e5e1134-cb08-4676-b40b-5e05af038ec7-operator-scripts\") pod \"8e5e1134-cb08-4676-b40b-5e05af038ec7\" (UID: \"8e5e1134-cb08-4676-b40b-5e05af038ec7\") " Jan 22 14:00:38 crc kubenswrapper[4769]: I0122 14:00:38.646455 4769 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bced8c79-d4b4-42dc-ba19-a4ba1eeb4387-operator-scripts\") pod \"bced8c79-d4b4-42dc-ba19-a4ba1eeb4387\" (UID: \"bced8c79-d4b4-42dc-ba19-a4ba1eeb4387\") " Jan 22 14:00:38 crc kubenswrapper[4769]: I0122 14:00:38.647006 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e5e1134-cb08-4676-b40b-5e05af038ec7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8e5e1134-cb08-4676-b40b-5e05af038ec7" (UID: "8e5e1134-cb08-4676-b40b-5e05af038ec7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:00:38 crc kubenswrapper[4769]: I0122 14:00:38.647047 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46ca4e3b-a376-4f54-88c0-75d4a912d489-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "46ca4e3b-a376-4f54-88c0-75d4a912d489" (UID: "46ca4e3b-a376-4f54-88c0-75d4a912d489"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:00:38 crc kubenswrapper[4769]: I0122 14:00:38.648887 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bced8c79-d4b4-42dc-ba19-a4ba1eeb4387-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bced8c79-d4b4-42dc-ba19-a4ba1eeb4387" (UID: "bced8c79-d4b4-42dc-ba19-a4ba1eeb4387"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:00:38 crc kubenswrapper[4769]: I0122 14:00:38.649735 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46ca4e3b-a376-4f54-88c0-75d4a912d489-kube-api-access-gl85k" (OuterVolumeSpecName: "kube-api-access-gl85k") pod "46ca4e3b-a376-4f54-88c0-75d4a912d489" (UID: "46ca4e3b-a376-4f54-88c0-75d4a912d489"). InnerVolumeSpecName "kube-api-access-gl85k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:00:38 crc kubenswrapper[4769]: I0122 14:00:38.650446 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bced8c79-d4b4-42dc-ba19-a4ba1eeb4387-kube-api-access-bjlc2" (OuterVolumeSpecName: "kube-api-access-bjlc2") pod "bced8c79-d4b4-42dc-ba19-a4ba1eeb4387" (UID: "bced8c79-d4b4-42dc-ba19-a4ba1eeb4387"). InnerVolumeSpecName "kube-api-access-bjlc2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:00:38 crc kubenswrapper[4769]: I0122 14:00:38.651571 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e5e1134-cb08-4676-b40b-5e05af038ec7-kube-api-access-lf8bm" (OuterVolumeSpecName: "kube-api-access-lf8bm") pod "8e5e1134-cb08-4676-b40b-5e05af038ec7" (UID: "8e5e1134-cb08-4676-b40b-5e05af038ec7"). InnerVolumeSpecName "kube-api-access-lf8bm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:00:38 crc kubenswrapper[4769]: I0122 14:00:38.748177 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjlc2\" (UniqueName: \"kubernetes.io/projected/bced8c79-d4b4-42dc-ba19-a4ba1eeb4387-kube-api-access-bjlc2\") on node \"crc\" DevicePath \"\"" Jan 22 14:00:38 crc kubenswrapper[4769]: I0122 14:00:38.748214 4769 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46ca4e3b-a376-4f54-88c0-75d4a912d489-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 14:00:38 crc kubenswrapper[4769]: I0122 14:00:38.748225 4769 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e5e1134-cb08-4676-b40b-5e05af038ec7-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 14:00:38 crc kubenswrapper[4769]: I0122 14:00:38.748236 4769 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bced8c79-d4b4-42dc-ba19-a4ba1eeb4387-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 14:00:38 crc kubenswrapper[4769]: I0122 14:00:38.748250 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gl85k\" (UniqueName: \"kubernetes.io/projected/46ca4e3b-a376-4f54-88c0-75d4a912d489-kube-api-access-gl85k\") on node \"crc\" DevicePath \"\"" Jan 22 14:00:38 crc kubenswrapper[4769]: I0122 14:00:38.748263 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lf8bm\" (UniqueName: \"kubernetes.io/projected/8e5e1134-cb08-4676-b40b-5e05af038ec7-kube-api-access-lf8bm\") on node \"crc\" DevicePath \"\"" Jan 22 14:00:38 crc kubenswrapper[4769]: I0122 14:00:38.927684 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-7q976" event={"ID":"257149e5-e0f3-4721-9329-6c119ce91192","Type":"ContainerDied","Data":"dbd1cb91be4ead0d1232743d3eb938c2081f310049bb6a53aa884f832a09a868"} Jan 22 14:00:38 crc kubenswrapper[4769]: I0122 14:00:38.927726 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dbd1cb91be4ead0d1232743d3eb938c2081f310049bb6a53aa884f832a09a868" Jan 22 14:00:38 crc kubenswrapper[4769]: I0122 14:00:38.927734 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-7q976" Jan 22 14:00:38 crc kubenswrapper[4769]: I0122 14:00:38.930350 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-0c5f-account-create-update-dbzd4" event={"ID":"bced8c79-d4b4-42dc-ba19-a4ba1eeb4387","Type":"ContainerDied","Data":"0806411dbac78855277ccd8aae65453370b85fb1ff508ae26217b4b63474dfa8"} Jan 22 14:00:38 crc kubenswrapper[4769]: I0122 14:00:38.930500 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0806411dbac78855277ccd8aae65453370b85fb1ff508ae26217b4b63474dfa8" Jan 22 14:00:38 crc kubenswrapper[4769]: I0122 14:00:38.930664 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-0c5f-account-create-update-dbzd4" Jan 22 14:00:38 crc kubenswrapper[4769]: I0122 14:00:38.933097 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-mw8m7" Jan 22 14:00:38 crc kubenswrapper[4769]: I0122 14:00:38.933096 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-mw8m7" event={"ID":"8e5e1134-cb08-4676-b40b-5e05af038ec7","Type":"ContainerDied","Data":"aeb0990d033e2bd5d75575962246340f82522b4363e6604461826d0c90f386cb"} Jan 22 14:00:38 crc kubenswrapper[4769]: I0122 14:00:38.933441 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aeb0990d033e2bd5d75575962246340f82522b4363e6604461826d0c90f386cb" Jan 22 14:00:38 crc kubenswrapper[4769]: I0122 14:00:38.934849 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-a329-account-create-update-5dtjs" Jan 22 14:00:38 crc kubenswrapper[4769]: I0122 14:00:38.935237 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-a329-account-create-update-5dtjs" event={"ID":"46ca4e3b-a376-4f54-88c0-75d4a912d489","Type":"ContainerDied","Data":"5599ff455012fd2651b3f2b0c6e96e5330d4661239d31dd6a13c19c8874810a4"} Jan 22 14:00:38 crc kubenswrapper[4769]: I0122 14:00:38.935288 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5599ff455012fd2651b3f2b0c6e96e5330d4661239d31dd6a13c19c8874810a4" Jan 22 14:00:39 crc kubenswrapper[4769]: I0122 14:00:39.784760 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-twczw" Jan 22 14:00:39 crc kubenswrapper[4769]: I0122 14:00:39.835277 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-qvqgs"] Jan 22 14:00:39 crc kubenswrapper[4769]: I0122 14:00:39.835849 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-qvqgs" podUID="b51a7d68-4414-4157-ab31-b5ee67a26b87" containerName="dnsmasq-dns" containerID="cri-o://8a49eca2021a2295ffe88f33f58659f6911edf81dd9a4c1261422569e89aab41" gracePeriod=10 Jan 22 14:00:40 crc kubenswrapper[4769]: I0122 14:00:40.019955 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-dxwjl"] Jan 22 14:00:40 crc kubenswrapper[4769]: E0122 14:00:40.020375 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="257149e5-e0f3-4721-9329-6c119ce91192" containerName="mariadb-database-create" Jan 22 14:00:40 crc kubenswrapper[4769]: I0122 14:00:40.020392 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="257149e5-e0f3-4721-9329-6c119ce91192" containerName="mariadb-database-create" Jan 22 14:00:40 crc kubenswrapper[4769]: E0122 14:00:40.020427 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e5e1134-cb08-4676-b40b-5e05af038ec7" containerName="mariadb-database-create" Jan 22 14:00:40 crc kubenswrapper[4769]: I0122 14:00:40.020436 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e5e1134-cb08-4676-b40b-5e05af038ec7" containerName="mariadb-database-create" Jan 22 14:00:40 crc kubenswrapper[4769]: E0122 14:00:40.020457 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bced8c79-d4b4-42dc-ba19-a4ba1eeb4387" containerName="mariadb-account-create-update" Jan 22 14:00:40 crc kubenswrapper[4769]: I0122 14:00:40.020465 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="bced8c79-d4b4-42dc-ba19-a4ba1eeb4387" containerName="mariadb-account-create-update" Jan 22 14:00:40 crc kubenswrapper[4769]: E0122 14:00:40.020486 4769 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="46ca4e3b-a376-4f54-88c0-75d4a912d489" containerName="mariadb-account-create-update" Jan 22 14:00:40 crc kubenswrapper[4769]: I0122 14:00:40.020493 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="46ca4e3b-a376-4f54-88c0-75d4a912d489" containerName="mariadb-account-create-update" Jan 22 14:00:40 crc kubenswrapper[4769]: I0122 14:00:40.020707 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="46ca4e3b-a376-4f54-88c0-75d4a912d489" containerName="mariadb-account-create-update" Jan 22 14:00:40 crc kubenswrapper[4769]: I0122 14:00:40.020728 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="257149e5-e0f3-4721-9329-6c119ce91192" containerName="mariadb-database-create" Jan 22 14:00:40 crc kubenswrapper[4769]: I0122 14:00:40.020741 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="bced8c79-d4b4-42dc-ba19-a4ba1eeb4387" containerName="mariadb-account-create-update" Jan 22 14:00:40 crc kubenswrapper[4769]: I0122 14:00:40.020762 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e5e1134-cb08-4676-b40b-5e05af038ec7" containerName="mariadb-database-create" Jan 22 14:00:40 crc kubenswrapper[4769]: I0122 14:00:40.021403 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-dxwjl" Jan 22 14:00:40 crc kubenswrapper[4769]: I0122 14:00:40.027704 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-dxwjl"] Jan 22 14:00:40 crc kubenswrapper[4769]: I0122 14:00:40.129878 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-b906-account-create-update-rndmt"] Jan 22 14:00:40 crc kubenswrapper[4769]: I0122 14:00:40.131494 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-b906-account-create-update-rndmt" Jan 22 14:00:40 crc kubenswrapper[4769]: I0122 14:00:40.133232 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 22 14:00:40 crc kubenswrapper[4769]: I0122 14:00:40.139528 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-b906-account-create-update-rndmt"] Jan 22 14:00:40 crc kubenswrapper[4769]: I0122 14:00:40.189531 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b909a789-674d-40ba-b332-700e27464966-operator-scripts\") pod \"glance-db-create-dxwjl\" (UID: \"b909a789-674d-40ba-b332-700e27464966\") " pod="openstack/glance-db-create-dxwjl" Jan 22 14:00:40 crc kubenswrapper[4769]: I0122 14:00:40.189782 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-th6kb\" (UniqueName: \"kubernetes.io/projected/b909a789-674d-40ba-b332-700e27464966-kube-api-access-th6kb\") pod \"glance-db-create-dxwjl\" (UID: \"b909a789-674d-40ba-b332-700e27464966\") " pod="openstack/glance-db-create-dxwjl" Jan 22 14:00:40 crc kubenswrapper[4769]: I0122 14:00:40.291501 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-th6kb\" (UniqueName: \"kubernetes.io/projected/b909a789-674d-40ba-b332-700e27464966-kube-api-access-th6kb\") pod \"glance-db-create-dxwjl\" (UID: \"b909a789-674d-40ba-b332-700e27464966\") " pod="openstack/glance-db-create-dxwjl" Jan 22 14:00:40 crc kubenswrapper[4769]: I0122 14:00:40.291587 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5c75\" (UniqueName: \"kubernetes.io/projected/73fd3df5-6e83-4893-9368-66c1ba35155a-kube-api-access-n5c75\") pod \"glance-b906-account-create-update-rndmt\" (UID: \"73fd3df5-6e83-4893-9368-66c1ba35155a\") " pod="openstack/glance-b906-account-create-update-rndmt" Jan 22 14:00:40 crc kubenswrapper[4769]: I0122 14:00:40.291680 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73fd3df5-6e83-4893-9368-66c1ba35155a-operator-scripts\") pod \"glance-b906-account-create-update-rndmt\" (UID: \"73fd3df5-6e83-4893-9368-66c1ba35155a\") " pod="openstack/glance-b906-account-create-update-rndmt" Jan 22 14:00:40 crc kubenswrapper[4769]: I0122 14:00:40.291727 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b909a789-674d-40ba-b332-700e27464966-operator-scripts\") pod \"glance-db-create-dxwjl\" (UID: \"b909a789-674d-40ba-b332-700e27464966\") " pod="openstack/glance-db-create-dxwjl" Jan 22 14:00:40 crc kubenswrapper[4769]: I0122 14:00:40.292990 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b909a789-674d-40ba-b332-700e27464966-operator-scripts\") pod \"glance-db-create-dxwjl\" (UID: \"b909a789-674d-40ba-b332-700e27464966\") " pod="openstack/glance-db-create-dxwjl" Jan 22 14:00:40 crc kubenswrapper[4769]: I0122 14:00:40.309286 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-th6kb\" (UniqueName: \"kubernetes.io/projected/b909a789-674d-40ba-b332-700e27464966-kube-api-access-th6kb\") pod \"glance-db-create-dxwjl\" (UID: 
\"b909a789-674d-40ba-b332-700e27464966\") " pod="openstack/glance-db-create-dxwjl" Jan 22 14:00:40 crc kubenswrapper[4769]: I0122 14:00:40.354336 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-dxwjl" Jan 22 14:00:40 crc kubenswrapper[4769]: I0122 14:00:40.392828 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5c75\" (UniqueName: \"kubernetes.io/projected/73fd3df5-6e83-4893-9368-66c1ba35155a-kube-api-access-n5c75\") pod \"glance-b906-account-create-update-rndmt\" (UID: \"73fd3df5-6e83-4893-9368-66c1ba35155a\") " pod="openstack/glance-b906-account-create-update-rndmt" Jan 22 14:00:40 crc kubenswrapper[4769]: I0122 14:00:40.392921 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73fd3df5-6e83-4893-9368-66c1ba35155a-operator-scripts\") pod \"glance-b906-account-create-update-rndmt\" (UID: \"73fd3df5-6e83-4893-9368-66c1ba35155a\") " pod="openstack/glance-b906-account-create-update-rndmt" Jan 22 14:00:40 crc kubenswrapper[4769]: I0122 14:00:40.393647 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73fd3df5-6e83-4893-9368-66c1ba35155a-operator-scripts\") pod \"glance-b906-account-create-update-rndmt\" (UID: \"73fd3df5-6e83-4893-9368-66c1ba35155a\") " pod="openstack/glance-b906-account-create-update-rndmt" Jan 22 14:00:40 crc kubenswrapper[4769]: I0122 14:00:40.420481 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5c75\" (UniqueName: \"kubernetes.io/projected/73fd3df5-6e83-4893-9368-66c1ba35155a-kube-api-access-n5c75\") pod \"glance-b906-account-create-update-rndmt\" (UID: \"73fd3df5-6e83-4893-9368-66c1ba35155a\") " pod="openstack/glance-b906-account-create-update-rndmt" Jan 22 14:00:40 crc kubenswrapper[4769]: I0122 14:00:40.548178 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b906-account-create-update-rndmt" Jan 22 14:00:40 crc kubenswrapper[4769]: I0122 14:00:40.670554 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-dxwjl"] Jan 22 14:00:40 crc kubenswrapper[4769]: W0122 14:00:40.696928 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb909a789_674d_40ba_b332_700e27464966.slice/crio-0a081e86b015573dfa11d971cf861b68ff7a7bd2a89aa9d93058fbab522b6944 WatchSource:0}: Error finding container 0a081e86b015573dfa11d971cf861b68ff7a7bd2a89aa9d93058fbab522b6944: Status 404 returned error can't find the container with id 0a081e86b015573dfa11d971cf861b68ff7a7bd2a89aa9d93058fbab522b6944 Jan 22 14:00:40 crc kubenswrapper[4769]: I0122 14:00:40.962721 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-qvqgs" Jan 22 14:00:40 crc kubenswrapper[4769]: I0122 14:00:40.962977 4769 generic.go:334] "Generic (PLEG): container finished" podID="b51a7d68-4414-4157-ab31-b5ee67a26b87" containerID="8a49eca2021a2295ffe88f33f58659f6911edf81dd9a4c1261422569e89aab41" exitCode=0 Jan 22 14:00:40 crc kubenswrapper[4769]: I0122 14:00:40.963075 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-qvqgs" event={"ID":"b51a7d68-4414-4157-ab31-b5ee67a26b87","Type":"ContainerDied","Data":"8a49eca2021a2295ffe88f33f58659f6911edf81dd9a4c1261422569e89aab41"} Jan 22 14:00:40 crc kubenswrapper[4769]: I0122 14:00:40.963608 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-qvqgs" event={"ID":"b51a7d68-4414-4157-ab31-b5ee67a26b87","Type":"ContainerDied","Data":"cec75e0348d51bc91245a011b2511f0acd3a0ca2ec0f078a6f1e2f875edd2e6f"} Jan 22 14:00:40 crc kubenswrapper[4769]: I0122 14:00:40.963633 4769 scope.go:117] "RemoveContainer" containerID="8a49eca2021a2295ffe88f33f58659f6911edf81dd9a4c1261422569e89aab41" Jan 22 14:00:40 crc kubenswrapper[4769]: I0122 14:00:40.971873 4769 generic.go:334] "Generic (PLEG): container finished" podID="12de511c-514e-496c-9fbf-6d1e10db81fc" containerID="02b31e2a239b0168026857e943798de5de7f95b04782c217474e99a5a431076d" exitCode=0 Jan 22 14:00:40 crc kubenswrapper[4769]: I0122 14:00:40.971947 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"12de511c-514e-496c-9fbf-6d1e10db81fc","Type":"ContainerDied","Data":"02b31e2a239b0168026857e943798de5de7f95b04782c217474e99a5a431076d"} Jan 22 14:00:40 crc kubenswrapper[4769]: I0122 14:00:40.979594 4769 generic.go:334] "Generic (PLEG): container finished" podID="7b5386c6-ecca-4882-b692-80c4f5a194e7" containerID="cd37417a78b080b1ccc1b5edbe869aca8460373ef9a4d35cbfcb0a8060072f8f" exitCode=0 Jan 22 14:00:40 crc kubenswrapper[4769]: I0122 14:00:40.979681 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7b5386c6-ecca-4882-b692-80c4f5a194e7","Type":"ContainerDied","Data":"cd37417a78b080b1ccc1b5edbe869aca8460373ef9a4d35cbfcb0a8060072f8f"} Jan 22 14:00:40 crc kubenswrapper[4769]: I0122 14:00:40.997309 4769 scope.go:117] "RemoveContainer" containerID="ee9898fa7e974bc9f074358f6748677719c62c630a7913b53ab6b56932e4d895" Jan 22 14:00:40 crc kubenswrapper[4769]: I0122 14:00:40.998221 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-dxwjl" event={"ID":"b909a789-674d-40ba-b332-700e27464966","Type":"ContainerStarted","Data":"0a081e86b015573dfa11d971cf861b68ff7a7bd2a89aa9d93058fbab522b6944"} Jan 22 14:00:41 crc kubenswrapper[4769]: I0122 14:00:41.043436 4769 scope.go:117] "RemoveContainer" containerID="8a49eca2021a2295ffe88f33f58659f6911edf81dd9a4c1261422569e89aab41" Jan 22 14:00:41 crc kubenswrapper[4769]: E0122 14:00:41.044415 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a49eca2021a2295ffe88f33f58659f6911edf81dd9a4c1261422569e89aab41\": container with ID starting with 8a49eca2021a2295ffe88f33f58659f6911edf81dd9a4c1261422569e89aab41 not found: ID does not exist" containerID="8a49eca2021a2295ffe88f33f58659f6911edf81dd9a4c1261422569e89aab41" Jan 22 14:00:41 crc kubenswrapper[4769]: I0122 14:00:41.044453 4769 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"8a49eca2021a2295ffe88f33f58659f6911edf81dd9a4c1261422569e89aab41"} err="failed to get container status \"8a49eca2021a2295ffe88f33f58659f6911edf81dd9a4c1261422569e89aab41\": rpc error: code = NotFound desc = could not find container \"8a49eca2021a2295ffe88f33f58659f6911edf81dd9a4c1261422569e89aab41\": container with ID starting with 8a49eca2021a2295ffe88f33f58659f6911edf81dd9a4c1261422569e89aab41 not found: ID does not exist" Jan 22 14:00:41 crc kubenswrapper[4769]: I0122 14:00:41.044476 4769 scope.go:117] "RemoveContainer" containerID="ee9898fa7e974bc9f074358f6748677719c62c630a7913b53ab6b56932e4d895" Jan 22 14:00:41 crc kubenswrapper[4769]: E0122 14:00:41.045704 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee9898fa7e974bc9f074358f6748677719c62c630a7913b53ab6b56932e4d895\": container with ID starting with ee9898fa7e974bc9f074358f6748677719c62c630a7913b53ab6b56932e4d895 not found: ID does not exist" containerID="ee9898fa7e974bc9f074358f6748677719c62c630a7913b53ab6b56932e4d895" Jan 22 14:00:41 crc kubenswrapper[4769]: I0122 14:00:41.045759 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee9898fa7e974bc9f074358f6748677719c62c630a7913b53ab6b56932e4d895"} err="failed to get container status \"ee9898fa7e974bc9f074358f6748677719c62c630a7913b53ab6b56932e4d895\": rpc error: code = NotFound desc = could not find container \"ee9898fa7e974bc9f074358f6748677719c62c630a7913b53ab6b56932e4d895\": container with ID starting with ee9898fa7e974bc9f074358f6748677719c62c630a7913b53ab6b56932e4d895 not found: ID does not exist" Jan 22 14:00:41 crc kubenswrapper[4769]: I0122 14:00:41.065100 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-dxwjl" podStartSLOduration=1.065081956 podStartE2EDuration="1.065081956s" podCreationTimestamp="2026-01-22 14:00:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:00:41.059531881 +0000 UTC m=+1020.470641820" watchObservedRunningTime="2026-01-22 14:00:41.065081956 +0000 UTC m=+1020.476191885" Jan 22 14:00:41 crc kubenswrapper[4769]: I0122 14:00:41.109984 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b51a7d68-4414-4157-ab31-b5ee67a26b87-config\") pod \"b51a7d68-4414-4157-ab31-b5ee67a26b87\" (UID: \"b51a7d68-4414-4157-ab31-b5ee67a26b87\") " Jan 22 14:00:41 crc kubenswrapper[4769]: I0122 14:00:41.110112 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjtk6\" (UniqueName: \"kubernetes.io/projected/b51a7d68-4414-4157-ab31-b5ee67a26b87-kube-api-access-rjtk6\") pod \"b51a7d68-4414-4157-ab31-b5ee67a26b87\" (UID: \"b51a7d68-4414-4157-ab31-b5ee67a26b87\") " Jan 22 14:00:41 crc kubenswrapper[4769]: I0122 14:00:41.110865 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b51a7d68-4414-4157-ab31-b5ee67a26b87-dns-svc\") pod \"b51a7d68-4414-4157-ab31-b5ee67a26b87\" (UID: \"b51a7d68-4414-4157-ab31-b5ee67a26b87\") " Jan 22 14:00:41 crc kubenswrapper[4769]: I0122 14:00:41.115445 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b51a7d68-4414-4157-ab31-b5ee67a26b87-kube-api-access-rjtk6" 
Jan 22 14:00:41 crc kubenswrapper[4769]: I0122 14:00:41.160257 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b51a7d68-4414-4157-ab31-b5ee67a26b87-config" (OuterVolumeSpecName: "config") pod "b51a7d68-4414-4157-ab31-b5ee67a26b87" (UID: "b51a7d68-4414-4157-ab31-b5ee67a26b87"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 14:00:41 crc kubenswrapper[4769]: I0122 14:00:41.162359 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b51a7d68-4414-4157-ab31-b5ee67a26b87-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b51a7d68-4414-4157-ab31-b5ee67a26b87" (UID: "b51a7d68-4414-4157-ab31-b5ee67a26b87"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 14:00:41 crc kubenswrapper[4769]: I0122 14:00:41.185753 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-b906-account-create-update-rndmt"]
Jan 22 14:00:41 crc kubenswrapper[4769]: I0122 14:00:41.195223 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Jan 22 14:00:41 crc kubenswrapper[4769]: I0122 14:00:41.213866 4769 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b51a7d68-4414-4157-ab31-b5ee67a26b87-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 22 14:00:41 crc kubenswrapper[4769]: I0122 14:00:41.213900 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b51a7d68-4414-4157-ab31-b5ee67a26b87-config\") on node \"crc\" DevicePath \"\""
Jan 22 14:00:41 crc kubenswrapper[4769]: I0122 14:00:41.213916 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjtk6\" (UniqueName: \"kubernetes.io/projected/b51a7d68-4414-4157-ab31-b5ee67a26b87-kube-api-access-rjtk6\") on node \"crc\" DevicePath \"\""
Jan 22 14:00:41 crc kubenswrapper[4769]: I0122 14:00:41.766883 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-wfphv"]
Jan 22 14:00:41 crc kubenswrapper[4769]: E0122 14:00:41.767290 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b51a7d68-4414-4157-ab31-b5ee67a26b87" containerName="init"
Jan 22 14:00:41 crc kubenswrapper[4769]: I0122 14:00:41.767311 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="b51a7d68-4414-4157-ab31-b5ee67a26b87" containerName="init"
Jan 22 14:00:41 crc kubenswrapper[4769]: E0122 14:00:41.767329 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b51a7d68-4414-4157-ab31-b5ee67a26b87" containerName="dnsmasq-dns"
Jan 22 14:00:41 crc kubenswrapper[4769]: I0122 14:00:41.767338 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="b51a7d68-4414-4157-ab31-b5ee67a26b87" containerName="dnsmasq-dns"
Jan 22 14:00:41 crc kubenswrapper[4769]: I0122 14:00:41.767526 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="b51a7d68-4414-4157-ab31-b5ee67a26b87" containerName="dnsmasq-dns"
Jan 22 14:00:41 crc kubenswrapper[4769]: I0122 14:00:41.768219 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-wfphv"
Jan 22 14:00:41 crc kubenswrapper[4769]: I0122 14:00:41.776248 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret"
Jan 22 14:00:41 crc kubenswrapper[4769]: I0122 14:00:41.779124 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-wfphv"]
Jan 22 14:00:41 crc kubenswrapper[4769]: I0122 14:00:41.825694 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4195c73b-d10a-4b39-ad10-1da9502af686-operator-scripts\") pod \"root-account-create-update-wfphv\" (UID: \"4195c73b-d10a-4b39-ad10-1da9502af686\") " pod="openstack/root-account-create-update-wfphv"
Jan 22 14:00:41 crc kubenswrapper[4769]: I0122 14:00:41.826080 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7ks4\" (UniqueName: \"kubernetes.io/projected/4195c73b-d10a-4b39-ad10-1da9502af686-kube-api-access-g7ks4\") pod \"root-account-create-update-wfphv\" (UID: \"4195c73b-d10a-4b39-ad10-1da9502af686\") " pod="openstack/root-account-create-update-wfphv"
Jan 22 14:00:41 crc kubenswrapper[4769]: I0122 14:00:41.927639 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4195c73b-d10a-4b39-ad10-1da9502af686-operator-scripts\") pod \"root-account-create-update-wfphv\" (UID: \"4195c73b-d10a-4b39-ad10-1da9502af686\") " pod="openstack/root-account-create-update-wfphv"
Jan 22 14:00:41 crc kubenswrapper[4769]: I0122 14:00:41.927740 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7ks4\" (UniqueName: \"kubernetes.io/projected/4195c73b-d10a-4b39-ad10-1da9502af686-kube-api-access-g7ks4\") pod \"root-account-create-update-wfphv\" (UID: \"4195c73b-d10a-4b39-ad10-1da9502af686\") " pod="openstack/root-account-create-update-wfphv"
Jan 22 14:00:41 crc kubenswrapper[4769]: I0122 14:00:41.928846 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4195c73b-d10a-4b39-ad10-1da9502af686-operator-scripts\") pod \"root-account-create-update-wfphv\" (UID: \"4195c73b-d10a-4b39-ad10-1da9502af686\") " pod="openstack/root-account-create-update-wfphv"
Jan 22 14:00:41 crc kubenswrapper[4769]: I0122 14:00:41.949442 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7ks4\" (UniqueName: \"kubernetes.io/projected/4195c73b-d10a-4b39-ad10-1da9502af686-kube-api-access-g7ks4\") pod \"root-account-create-update-wfphv\" (UID: \"4195c73b-d10a-4b39-ad10-1da9502af686\") " pod="openstack/root-account-create-update-wfphv"
Jan 22 14:00:42 crc kubenswrapper[4769]: I0122 14:00:42.006221 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"12de511c-514e-496c-9fbf-6d1e10db81fc","Type":"ContainerStarted","Data":"49f4ea3ddc87a4f5bedaa873ef01966d747d665e05df782c166bb9cc4f6f7bd0"}
Jan 22 14:00:42 crc kubenswrapper[4769]: I0122 14:00:42.006581 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Jan 22 14:00:42 crc kubenswrapper[4769]: I0122 14:00:42.008274 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7b5386c6-ecca-4882-b692-80c4f5a194e7","Type":"ContainerStarted","Data":"401fb4362859b85fbcab13853d6edb403e6c11a9836d41d62c76e8de98656fce"}
Jan 22 14:00:42 crc kubenswrapper[4769]: I0122 14:00:42.008550 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Jan 22 14:00:42 crc kubenswrapper[4769]: I0122 14:00:42.009634 4769 generic.go:334] "Generic (PLEG): container finished" podID="b909a789-674d-40ba-b332-700e27464966" containerID="fb2e3c339083927502fb6cea262472f4288b04764f08eec3cbd1e7e2b61cc67d" exitCode=0
Jan 22 14:00:42 crc kubenswrapper[4769]: I0122 14:00:42.009676 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-dxwjl" event={"ID":"b909a789-674d-40ba-b332-700e27464966","Type":"ContainerDied","Data":"fb2e3c339083927502fb6cea262472f4288b04764f08eec3cbd1e7e2b61cc67d"}
Jan 22 14:00:42 crc kubenswrapper[4769]: I0122 14:00:42.010979 4769 generic.go:334] "Generic (PLEG): container finished" podID="73fd3df5-6e83-4893-9368-66c1ba35155a" containerID="8c802b2b696d681ed9980b953b8105bed5cefd906bb042dcf0b8c4943c91185b" exitCode=0
Jan 22 14:00:42 crc kubenswrapper[4769]: I0122 14:00:42.011050 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b906-account-create-update-rndmt" event={"ID":"73fd3df5-6e83-4893-9368-66c1ba35155a","Type":"ContainerDied","Data":"8c802b2b696d681ed9980b953b8105bed5cefd906bb042dcf0b8c4943c91185b"}
Jan 22 14:00:42 crc kubenswrapper[4769]: I0122 14:00:42.011072 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-qvqgs"
Jan 22 14:00:42 crc kubenswrapper[4769]: I0122 14:00:42.011086 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b906-account-create-update-rndmt" event={"ID":"73fd3df5-6e83-4893-9368-66c1ba35155a","Type":"ContainerStarted","Data":"935a6bc520b697a1a8e7658924bf97f8f46c6f788a0b1b218816dcc36fbdabae"}
Jan 22 14:00:42 crc kubenswrapper[4769]: I0122 14:00:42.037413 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=40.818238729 podStartE2EDuration="52.037392355s" podCreationTimestamp="2026-01-22 13:59:50 +0000 UTC" firstStartedPulling="2026-01-22 13:59:55.746910833 +0000 UTC m=+975.158020772" lastFinishedPulling="2026-01-22 14:00:06.966064469 +0000 UTC m=+986.377174398" observedRunningTime="2026-01-22 14:00:42.037275252 +0000 UTC m=+1021.448385201" watchObservedRunningTime="2026-01-22 14:00:42.037392355 +0000 UTC m=+1021.448502284"
Jan 22 14:00:42 crc kubenswrapper[4769]: I0122 14:00:42.057309 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=50.443146339 podStartE2EDuration="52.057285376s" podCreationTimestamp="2026-01-22 13:59:50 +0000 UTC" firstStartedPulling="2026-01-22 14:00:05.356158093 +0000 UTC m=+984.767268032" lastFinishedPulling="2026-01-22 14:00:06.97029713 +0000 UTC m=+986.381407069" observedRunningTime="2026-01-22 14:00:42.056714292 +0000 UTC m=+1021.467824251" watchObservedRunningTime="2026-01-22 14:00:42.057285376 +0000 UTC m=+1021.468395305"
Jan 22 14:00:42 crc kubenswrapper[4769]: I0122 14:00:42.106684 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-wfphv"
Need to start a new one" pod="openstack/root-account-create-update-wfphv" Jan 22 14:00:42 crc kubenswrapper[4769]: I0122 14:00:42.115683 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-qvqgs"] Jan 22 14:00:42 crc kubenswrapper[4769]: I0122 14:00:42.129191 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-qvqgs"] Jan 22 14:00:42 crc kubenswrapper[4769]: I0122 14:00:42.502620 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-wfphv"] Jan 22 14:00:42 crc kubenswrapper[4769]: I0122 14:00:42.917619 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b51a7d68-4414-4157-ab31-b5ee67a26b87" path="/var/lib/kubelet/pods/b51a7d68-4414-4157-ab31-b5ee67a26b87/volumes" Jan 22 14:00:43 crc kubenswrapper[4769]: I0122 14:00:43.024209 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-wfphv" event={"ID":"4195c73b-d10a-4b39-ad10-1da9502af686","Type":"ContainerStarted","Data":"ae72a3cad378713d6148c709f4937c708ece4459bfb2c249eb2d7b58d0c80b04"} Jan 22 14:00:43 crc kubenswrapper[4769]: I0122 14:00:43.024249 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-wfphv" event={"ID":"4195c73b-d10a-4b39-ad10-1da9502af686","Type":"ContainerStarted","Data":"828388dbceea74a9e45af3dfa3b37a9d86c0474f3d9ccae8f2a66ad1959e6c99"} Jan 22 14:00:43 crc kubenswrapper[4769]: I0122 14:00:43.056097 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-wfphv" podStartSLOduration=2.056071478 podStartE2EDuration="2.056071478s" podCreationTimestamp="2026-01-22 14:00:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:00:43.047389491 +0000 UTC m=+1022.458499430" watchObservedRunningTime="2026-01-22 14:00:43.056071478 +0000 UTC m=+1022.467181407" Jan 22 14:00:43 crc kubenswrapper[4769]: I0122 14:00:43.820744 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b906-account-create-update-rndmt" Jan 22 14:00:43 crc kubenswrapper[4769]: I0122 14:00:43.826874 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-dxwjl" Jan 22 14:00:44 crc kubenswrapper[4769]: I0122 14:00:44.004814 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b909a789-674d-40ba-b332-700e27464966-operator-scripts\") pod \"b909a789-674d-40ba-b332-700e27464966\" (UID: \"b909a789-674d-40ba-b332-700e27464966\") " Jan 22 14:00:44 crc kubenswrapper[4769]: I0122 14:00:44.004918 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n5c75\" (UniqueName: \"kubernetes.io/projected/73fd3df5-6e83-4893-9368-66c1ba35155a-kube-api-access-n5c75\") pod \"73fd3df5-6e83-4893-9368-66c1ba35155a\" (UID: \"73fd3df5-6e83-4893-9368-66c1ba35155a\") " Jan 22 14:00:44 crc kubenswrapper[4769]: I0122 14:00:44.004992 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-th6kb\" (UniqueName: \"kubernetes.io/projected/b909a789-674d-40ba-b332-700e27464966-kube-api-access-th6kb\") pod \"b909a789-674d-40ba-b332-700e27464966\" (UID: \"b909a789-674d-40ba-b332-700e27464966\") " Jan 22 14:00:44 crc kubenswrapper[4769]: I0122 14:00:44.005068 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73fd3df5-6e83-4893-9368-66c1ba35155a-operator-scripts\") pod \"73fd3df5-6e83-4893-9368-66c1ba35155a\" (UID: \"73fd3df5-6e83-4893-9368-66c1ba35155a\") " Jan 22 14:00:44 crc kubenswrapper[4769]: I0122 14:00:44.005698 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73fd3df5-6e83-4893-9368-66c1ba35155a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "73fd3df5-6e83-4893-9368-66c1ba35155a" (UID: "73fd3df5-6e83-4893-9368-66c1ba35155a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:00:44 crc kubenswrapper[4769]: I0122 14:00:44.006121 4769 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73fd3df5-6e83-4893-9368-66c1ba35155a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 14:00:44 crc kubenswrapper[4769]: I0122 14:00:44.006178 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b909a789-674d-40ba-b332-700e27464966-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b909a789-674d-40ba-b332-700e27464966" (UID: "b909a789-674d-40ba-b332-700e27464966"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:00:44 crc kubenswrapper[4769]: I0122 14:00:44.010517 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73fd3df5-6e83-4893-9368-66c1ba35155a-kube-api-access-n5c75" (OuterVolumeSpecName: "kube-api-access-n5c75") pod "73fd3df5-6e83-4893-9368-66c1ba35155a" (UID: "73fd3df5-6e83-4893-9368-66c1ba35155a"). InnerVolumeSpecName "kube-api-access-n5c75". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:00:44 crc kubenswrapper[4769]: I0122 14:00:44.010893 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b909a789-674d-40ba-b332-700e27464966-kube-api-access-th6kb" (OuterVolumeSpecName: "kube-api-access-th6kb") pod "b909a789-674d-40ba-b332-700e27464966" (UID: "b909a789-674d-40ba-b332-700e27464966"). InnerVolumeSpecName "kube-api-access-th6kb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:00:44 crc kubenswrapper[4769]: I0122 14:00:44.034101 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-dxwjl" event={"ID":"b909a789-674d-40ba-b332-700e27464966","Type":"ContainerDied","Data":"0a081e86b015573dfa11d971cf861b68ff7a7bd2a89aa9d93058fbab522b6944"} Jan 22 14:00:44 crc kubenswrapper[4769]: I0122 14:00:44.034145 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a081e86b015573dfa11d971cf861b68ff7a7bd2a89aa9d93058fbab522b6944" Jan 22 14:00:44 crc kubenswrapper[4769]: I0122 14:00:44.034117 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-dxwjl" Jan 22 14:00:44 crc kubenswrapper[4769]: I0122 14:00:44.036294 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b906-account-create-update-rndmt" event={"ID":"73fd3df5-6e83-4893-9368-66c1ba35155a","Type":"ContainerDied","Data":"935a6bc520b697a1a8e7658924bf97f8f46c6f788a0b1b218816dcc36fbdabae"} Jan 22 14:00:44 crc kubenswrapper[4769]: I0122 14:00:44.036331 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="935a6bc520b697a1a8e7658924bf97f8f46c6f788a0b1b218816dcc36fbdabae" Jan 22 14:00:44 crc kubenswrapper[4769]: I0122 14:00:44.036384 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b906-account-create-update-rndmt" Jan 22 14:00:44 crc kubenswrapper[4769]: I0122 14:00:44.047304 4769 generic.go:334] "Generic (PLEG): container finished" podID="4195c73b-d10a-4b39-ad10-1da9502af686" containerID="ae72a3cad378713d6148c709f4937c708ece4459bfb2c249eb2d7b58d0c80b04" exitCode=0 Jan 22 14:00:44 crc kubenswrapper[4769]: I0122 14:00:44.047343 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-wfphv" event={"ID":"4195c73b-d10a-4b39-ad10-1da9502af686","Type":"ContainerDied","Data":"ae72a3cad378713d6148c709f4937c708ece4459bfb2c249eb2d7b58d0c80b04"} Jan 22 14:00:44 crc kubenswrapper[4769]: I0122 14:00:44.107589 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ce65dba3-22b9-482f-b3da-2f4705468ea4-etc-swift\") pod \"swift-storage-0\" (UID: \"ce65dba3-22b9-482f-b3da-2f4705468ea4\") " pod="openstack/swift-storage-0" Jan 22 14:00:44 crc kubenswrapper[4769]: I0122 14:00:44.107733 4769 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b909a789-674d-40ba-b332-700e27464966-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 14:00:44 crc kubenswrapper[4769]: I0122 14:00:44.107751 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n5c75\" (UniqueName: \"kubernetes.io/projected/73fd3df5-6e83-4893-9368-66c1ba35155a-kube-api-access-n5c75\") on node \"crc\" DevicePath \"\"" Jan 22 14:00:44 crc kubenswrapper[4769]: I0122 14:00:44.107766 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-th6kb\" (UniqueName: \"kubernetes.io/projected/b909a789-674d-40ba-b332-700e27464966-kube-api-access-th6kb\") on node \"crc\" DevicePath \"\"" Jan 22 14:00:44 crc kubenswrapper[4769]: I0122 14:00:44.112117 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ce65dba3-22b9-482f-b3da-2f4705468ea4-etc-swift\") pod \"swift-storage-0\" (UID: 
\"ce65dba3-22b9-482f-b3da-2f4705468ea4\") " pod="openstack/swift-storage-0" Jan 22 14:00:44 crc kubenswrapper[4769]: I0122 14:00:44.196125 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 22 14:00:44 crc kubenswrapper[4769]: I0122 14:00:44.753651 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 22 14:00:44 crc kubenswrapper[4769]: W0122 14:00:44.760842 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce65dba3_22b9_482f_b3da_2f4705468ea4.slice/crio-d0571332421a1f6f93bec883afeb30fa53efe7aa65d653ea5843811e401aafa7 WatchSource:0}: Error finding container d0571332421a1f6f93bec883afeb30fa53efe7aa65d653ea5843811e401aafa7: Status 404 returned error can't find the container with id d0571332421a1f6f93bec883afeb30fa53efe7aa65d653ea5843811e401aafa7 Jan 22 14:00:45 crc kubenswrapper[4769]: I0122 14:00:45.057762 4769 generic.go:334] "Generic (PLEG): container finished" podID="f13b9a7b-6f5e-48fd-8d95-3beb851e9819" containerID="b3f6458924f57ce2e0a8e81626e83771a68f1ce1972979549e1eea8a213c5566" exitCode=0 Jan 22 14:00:45 crc kubenswrapper[4769]: I0122 14:00:45.057828 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-jmhxf" event={"ID":"f13b9a7b-6f5e-48fd-8d95-3beb851e9819","Type":"ContainerDied","Data":"b3f6458924f57ce2e0a8e81626e83771a68f1ce1972979549e1eea8a213c5566"} Jan 22 14:00:45 crc kubenswrapper[4769]: I0122 14:00:45.058967 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ce65dba3-22b9-482f-b3da-2f4705468ea4","Type":"ContainerStarted","Data":"d0571332421a1f6f93bec883afeb30fa53efe7aa65d653ea5843811e401aafa7"} Jan 22 14:00:45 crc kubenswrapper[4769]: I0122 14:00:45.253027 4769 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-ljbrk" podUID="db7ce269-d7ec-4db1-aab3-b22da5d56c6e" containerName="ovn-controller" probeResult="failure" output=< Jan 22 14:00:45 crc kubenswrapper[4769]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 22 14:00:45 crc kubenswrapper[4769]: > Jan 22 14:00:45 crc kubenswrapper[4769]: I0122 14:00:45.257582 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-57w6l" Jan 22 14:00:45 crc kubenswrapper[4769]: I0122 14:00:45.335093 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-t9sxw"] Jan 22 14:00:45 crc kubenswrapper[4769]: E0122 14:00:45.335422 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b909a789-674d-40ba-b332-700e27464966" containerName="mariadb-database-create" Jan 22 14:00:45 crc kubenswrapper[4769]: I0122 14:00:45.335438 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="b909a789-674d-40ba-b332-700e27464966" containerName="mariadb-database-create" Jan 22 14:00:45 crc kubenswrapper[4769]: E0122 14:00:45.335447 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73fd3df5-6e83-4893-9368-66c1ba35155a" containerName="mariadb-account-create-update" Jan 22 14:00:45 crc kubenswrapper[4769]: I0122 14:00:45.335453 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="73fd3df5-6e83-4893-9368-66c1ba35155a" containerName="mariadb-account-create-update" Jan 22 14:00:45 crc kubenswrapper[4769]: I0122 14:00:45.335619 4769 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="73fd3df5-6e83-4893-9368-66c1ba35155a" containerName="mariadb-account-create-update" Jan 22 14:00:45 crc kubenswrapper[4769]: I0122 14:00:45.335663 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="b909a789-674d-40ba-b332-700e27464966" containerName="mariadb-database-create" Jan 22 14:00:45 crc kubenswrapper[4769]: I0122 14:00:45.336292 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-t9sxw" Jan 22 14:00:45 crc kubenswrapper[4769]: I0122 14:00:45.338186 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 22 14:00:45 crc kubenswrapper[4769]: I0122 14:00:45.338268 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-khhk4" Jan 22 14:00:45 crc kubenswrapper[4769]: I0122 14:00:45.362996 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-t9sxw"] Jan 22 14:00:45 crc kubenswrapper[4769]: I0122 14:00:45.418284 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-wfphv" Jan 22 14:00:45 crc kubenswrapper[4769]: I0122 14:00:45.528665 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7ks4\" (UniqueName: \"kubernetes.io/projected/4195c73b-d10a-4b39-ad10-1da9502af686-kube-api-access-g7ks4\") pod \"4195c73b-d10a-4b39-ad10-1da9502af686\" (UID: \"4195c73b-d10a-4b39-ad10-1da9502af686\") " Jan 22 14:00:45 crc kubenswrapper[4769]: I0122 14:00:45.528772 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4195c73b-d10a-4b39-ad10-1da9502af686-operator-scripts\") pod \"4195c73b-d10a-4b39-ad10-1da9502af686\" (UID: \"4195c73b-d10a-4b39-ad10-1da9502af686\") " Jan 22 14:00:45 crc kubenswrapper[4769]: I0122 14:00:45.529067 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299-config-data\") pod \"glance-db-sync-t9sxw\" (UID: \"b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299\") " pod="openstack/glance-db-sync-t9sxw" Jan 22 14:00:45 crc kubenswrapper[4769]: I0122 14:00:45.529111 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299-combined-ca-bundle\") pod \"glance-db-sync-t9sxw\" (UID: \"b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299\") " pod="openstack/glance-db-sync-t9sxw" Jan 22 14:00:45 crc kubenswrapper[4769]: I0122 14:00:45.529578 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2g8wv\" (UniqueName: \"kubernetes.io/projected/b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299-kube-api-access-2g8wv\") pod \"glance-db-sync-t9sxw\" (UID: \"b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299\") " pod="openstack/glance-db-sync-t9sxw" Jan 22 14:00:45 crc kubenswrapper[4769]: I0122 14:00:45.529599 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4195c73b-d10a-4b39-ad10-1da9502af686-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4195c73b-d10a-4b39-ad10-1da9502af686" (UID: "4195c73b-d10a-4b39-ad10-1da9502af686"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:00:45 crc kubenswrapper[4769]: I0122 14:00:45.529777 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299-db-sync-config-data\") pod \"glance-db-sync-t9sxw\" (UID: \"b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299\") " pod="openstack/glance-db-sync-t9sxw" Jan 22 14:00:45 crc kubenswrapper[4769]: I0122 14:00:45.529981 4769 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4195c73b-d10a-4b39-ad10-1da9502af686-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 14:00:45 crc kubenswrapper[4769]: I0122 14:00:45.543005 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4195c73b-d10a-4b39-ad10-1da9502af686-kube-api-access-g7ks4" (OuterVolumeSpecName: "kube-api-access-g7ks4") pod "4195c73b-d10a-4b39-ad10-1da9502af686" (UID: "4195c73b-d10a-4b39-ad10-1da9502af686"). InnerVolumeSpecName "kube-api-access-g7ks4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:00:45 crc kubenswrapper[4769]: I0122 14:00:45.631372 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299-db-sync-config-data\") pod \"glance-db-sync-t9sxw\" (UID: \"b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299\") " pod="openstack/glance-db-sync-t9sxw" Jan 22 14:00:45 crc kubenswrapper[4769]: I0122 14:00:45.631806 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299-config-data\") pod \"glance-db-sync-t9sxw\" (UID: \"b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299\") " pod="openstack/glance-db-sync-t9sxw" Jan 22 14:00:45 crc kubenswrapper[4769]: I0122 14:00:45.631942 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299-combined-ca-bundle\") pod \"glance-db-sync-t9sxw\" (UID: \"b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299\") " pod="openstack/glance-db-sync-t9sxw" Jan 22 14:00:45 crc kubenswrapper[4769]: I0122 14:00:45.632063 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2g8wv\" (UniqueName: \"kubernetes.io/projected/b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299-kube-api-access-2g8wv\") pod \"glance-db-sync-t9sxw\" (UID: \"b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299\") " pod="openstack/glance-db-sync-t9sxw" Jan 22 14:00:45 crc kubenswrapper[4769]: I0122 14:00:45.632350 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7ks4\" (UniqueName: \"kubernetes.io/projected/4195c73b-d10a-4b39-ad10-1da9502af686-kube-api-access-g7ks4\") on node \"crc\" DevicePath \"\"" Jan 22 14:00:45 crc kubenswrapper[4769]: I0122 14:00:45.636398 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299-config-data\") pod \"glance-db-sync-t9sxw\" (UID: \"b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299\") " pod="openstack/glance-db-sync-t9sxw" Jan 22 14:00:45 crc kubenswrapper[4769]: I0122 14:00:45.638655 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299-combined-ca-bundle\") pod \"glance-db-sync-t9sxw\" (UID: \"b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299\") " pod="openstack/glance-db-sync-t9sxw" Jan 22 14:00:45 crc kubenswrapper[4769]: I0122 14:00:45.648166 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299-db-sync-config-data\") pod \"glance-db-sync-t9sxw\" (UID: \"b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299\") " pod="openstack/glance-db-sync-t9sxw" Jan 22 14:00:45 crc kubenswrapper[4769]: I0122 14:00:45.654221 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2g8wv\" (UniqueName: \"kubernetes.io/projected/b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299-kube-api-access-2g8wv\") pod \"glance-db-sync-t9sxw\" (UID: \"b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299\") " pod="openstack/glance-db-sync-t9sxw" Jan 22 14:00:45 crc kubenswrapper[4769]: I0122 14:00:45.716076 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-t9sxw" Jan 22 14:00:45 crc kubenswrapper[4769]: I0122 14:00:45.853348 4769 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-57d769cc4f-qvqgs" podUID="b51a7d68-4414-4157-ab31-b5ee67a26b87" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.97:5353: i/o timeout" Jan 22 14:00:46 crc kubenswrapper[4769]: I0122 14:00:46.072437 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-wfphv" Jan 22 14:00:46 crc kubenswrapper[4769]: I0122 14:00:46.072493 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-wfphv" event={"ID":"4195c73b-d10a-4b39-ad10-1da9502af686","Type":"ContainerDied","Data":"828388dbceea74a9e45af3dfa3b37a9d86c0474f3d9ccae8f2a66ad1959e6c99"} Jan 22 14:00:46 crc kubenswrapper[4769]: I0122 14:00:46.072527 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="828388dbceea74a9e45af3dfa3b37a9d86c0474f3d9ccae8f2a66ad1959e6c99" Jan 22 14:00:46 crc kubenswrapper[4769]: I0122 14:00:46.374062 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-t9sxw"] Jan 22 14:00:46 crc kubenswrapper[4769]: I0122 14:00:46.386015 4769 util.go:48] "No ready sandbox for pod can be found. 
Jan 22 14:00:46 crc kubenswrapper[4769]: I0122 14:00:46.547451 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f13b9a7b-6f5e-48fd-8d95-3beb851e9819-etc-swift\") pod \"f13b9a7b-6f5e-48fd-8d95-3beb851e9819\" (UID: \"f13b9a7b-6f5e-48fd-8d95-3beb851e9819\") "
Jan 22 14:00:46 crc kubenswrapper[4769]: I0122 14:00:46.547728 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzwjx\" (UniqueName: \"kubernetes.io/projected/f13b9a7b-6f5e-48fd-8d95-3beb851e9819-kube-api-access-tzwjx\") pod \"f13b9a7b-6f5e-48fd-8d95-3beb851e9819\" (UID: \"f13b9a7b-6f5e-48fd-8d95-3beb851e9819\") "
Jan 22 14:00:46 crc kubenswrapper[4769]: I0122 14:00:46.547772 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f13b9a7b-6f5e-48fd-8d95-3beb851e9819-swiftconf\") pod \"f13b9a7b-6f5e-48fd-8d95-3beb851e9819\" (UID: \"f13b9a7b-6f5e-48fd-8d95-3beb851e9819\") "
Jan 22 14:00:46 crc kubenswrapper[4769]: I0122 14:00:46.547843 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f13b9a7b-6f5e-48fd-8d95-3beb851e9819-dispersionconf\") pod \"f13b9a7b-6f5e-48fd-8d95-3beb851e9819\" (UID: \"f13b9a7b-6f5e-48fd-8d95-3beb851e9819\") "
Jan 22 14:00:46 crc kubenswrapper[4769]: I0122 14:00:46.547872 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f13b9a7b-6f5e-48fd-8d95-3beb851e9819-scripts\") pod \"f13b9a7b-6f5e-48fd-8d95-3beb851e9819\" (UID: \"f13b9a7b-6f5e-48fd-8d95-3beb851e9819\") "
Jan 22 14:00:46 crc kubenswrapper[4769]: I0122 14:00:46.547934 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f13b9a7b-6f5e-48fd-8d95-3beb851e9819-combined-ca-bundle\") pod \"f13b9a7b-6f5e-48fd-8d95-3beb851e9819\" (UID: \"f13b9a7b-6f5e-48fd-8d95-3beb851e9819\") "
Jan 22 14:00:46 crc kubenswrapper[4769]: I0122 14:00:46.547977 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f13b9a7b-6f5e-48fd-8d95-3beb851e9819-ring-data-devices\") pod \"f13b9a7b-6f5e-48fd-8d95-3beb851e9819\" (UID: \"f13b9a7b-6f5e-48fd-8d95-3beb851e9819\") "
Jan 22 14:00:46 crc kubenswrapper[4769]: I0122 14:00:46.548823 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f13b9a7b-6f5e-48fd-8d95-3beb851e9819-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "f13b9a7b-6f5e-48fd-8d95-3beb851e9819" (UID: "f13b9a7b-6f5e-48fd-8d95-3beb851e9819"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 14:00:46 crc kubenswrapper[4769]: I0122 14:00:46.548932 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f13b9a7b-6f5e-48fd-8d95-3beb851e9819-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "f13b9a7b-6f5e-48fd-8d95-3beb851e9819" (UID: "f13b9a7b-6f5e-48fd-8d95-3beb851e9819"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 14:00:46 crc kubenswrapper[4769]: I0122 14:00:46.553862 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f13b9a7b-6f5e-48fd-8d95-3beb851e9819-kube-api-access-tzwjx" (OuterVolumeSpecName: "kube-api-access-tzwjx") pod "f13b9a7b-6f5e-48fd-8d95-3beb851e9819" (UID: "f13b9a7b-6f5e-48fd-8d95-3beb851e9819"). InnerVolumeSpecName "kube-api-access-tzwjx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 14:00:46 crc kubenswrapper[4769]: I0122 14:00:46.559484 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f13b9a7b-6f5e-48fd-8d95-3beb851e9819-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "f13b9a7b-6f5e-48fd-8d95-3beb851e9819" (UID: "f13b9a7b-6f5e-48fd-8d95-3beb851e9819"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 14:00:46 crc kubenswrapper[4769]: I0122 14:00:46.568706 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f13b9a7b-6f5e-48fd-8d95-3beb851e9819-scripts" (OuterVolumeSpecName: "scripts") pod "f13b9a7b-6f5e-48fd-8d95-3beb851e9819" (UID: "f13b9a7b-6f5e-48fd-8d95-3beb851e9819"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 14:00:46 crc kubenswrapper[4769]: I0122 14:00:46.570517 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f13b9a7b-6f5e-48fd-8d95-3beb851e9819-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f13b9a7b-6f5e-48fd-8d95-3beb851e9819" (UID: "f13b9a7b-6f5e-48fd-8d95-3beb851e9819"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 14:00:46 crc kubenswrapper[4769]: I0122 14:00:46.578898 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0"
Jan 22 14:00:46 crc kubenswrapper[4769]: I0122 14:00:46.581443 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f13b9a7b-6f5e-48fd-8d95-3beb851e9819-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "f13b9a7b-6f5e-48fd-8d95-3beb851e9819" (UID: "f13b9a7b-6f5e-48fd-8d95-3beb851e9819"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 14:00:46 crc kubenswrapper[4769]: W0122 14:00:46.610155 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb4b4ca8a_8b9e_48d2_9208_fecb2bc9a299.slice/crio-a42428629277f66933eacee3971f3f2723dc11f98515a43b6d67b24d1023bea8 WatchSource:0}: Error finding container a42428629277f66933eacee3971f3f2723dc11f98515a43b6d67b24d1023bea8: Status 404 returned error can't find the container with id a42428629277f66933eacee3971f3f2723dc11f98515a43b6d67b24d1023bea8
Jan 22 14:00:46 crc kubenswrapper[4769]: I0122 14:00:46.649904 4769 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f13b9a7b-6f5e-48fd-8d95-3beb851e9819-etc-swift\") on node \"crc\" DevicePath \"\""
Jan 22 14:00:46 crc kubenswrapper[4769]: I0122 14:00:46.650396 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzwjx\" (UniqueName: \"kubernetes.io/projected/f13b9a7b-6f5e-48fd-8d95-3beb851e9819-kube-api-access-tzwjx\") on node \"crc\" DevicePath \"\""
Jan 22 14:00:46 crc kubenswrapper[4769]: I0122 14:00:46.650431 4769 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f13b9a7b-6f5e-48fd-8d95-3beb851e9819-swiftconf\") on node \"crc\" DevicePath \"\""
Jan 22 14:00:46 crc kubenswrapper[4769]: I0122 14:00:46.650442 4769 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f13b9a7b-6f5e-48fd-8d95-3beb851e9819-dispersionconf\") on node \"crc\" DevicePath \"\""
Jan 22 14:00:46 crc kubenswrapper[4769]: I0122 14:00:46.650451 4769 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f13b9a7b-6f5e-48fd-8d95-3beb851e9819-scripts\") on node \"crc\" DevicePath \"\""
Jan 22 14:00:46 crc kubenswrapper[4769]: I0122 14:00:46.650461 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f13b9a7b-6f5e-48fd-8d95-3beb851e9819-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 14:00:46 crc kubenswrapper[4769]: I0122 14:00:46.650469 4769 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f13b9a7b-6f5e-48fd-8d95-3beb851e9819-ring-data-devices\") on node \"crc\" DevicePath \"\""
Jan 22 14:00:47 crc kubenswrapper[4769]: I0122 14:00:47.094508 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ce65dba3-22b9-482f-b3da-2f4705468ea4","Type":"ContainerStarted","Data":"462869d558d9f49e29a5a34141e78fcd0c96ffa63f8f76014c23c4c843c4850e"}
Jan 22 14:00:47 crc kubenswrapper[4769]: I0122 14:00:47.097032 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-t9sxw" event={"ID":"b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299","Type":"ContainerStarted","Data":"a42428629277f66933eacee3971f3f2723dc11f98515a43b6d67b24d1023bea8"}
Jan 22 14:00:47 crc kubenswrapper[4769]: I0122 14:00:47.098197 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-jmhxf" event={"ID":"f13b9a7b-6f5e-48fd-8d95-3beb851e9819","Type":"ContainerDied","Data":"895da75304cec8858b8075e7a5265e609df985988010f8eef12f9027143cb2a0"}
Jan 22 14:00:47 crc kubenswrapper[4769]: I0122 14:00:47.098218 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="895da75304cec8858b8075e7a5265e609df985988010f8eef12f9027143cb2a0"
Jan 22 14:00:47 crc kubenswrapper[4769]: I0122 14:00:47.098269 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-jmhxf"
Jan 22 14:00:48 crc kubenswrapper[4769]: I0122 14:00:48.202783 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-wfphv"]
Jan 22 14:00:48 crc kubenswrapper[4769]: I0122 14:00:48.208403 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-wfphv"]
Jan 22 14:00:48 crc kubenswrapper[4769]: I0122 14:00:48.893378 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4195c73b-d10a-4b39-ad10-1da9502af686" path="/var/lib/kubelet/pods/4195c73b-d10a-4b39-ad10-1da9502af686/volumes"
Jan 22 14:00:49 crc kubenswrapper[4769]: I0122 14:00:49.117959 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ce65dba3-22b9-482f-b3da-2f4705468ea4","Type":"ContainerStarted","Data":"cbed28ee7193a910f0117dd368d60a4c91d6b9d9d61d79dc2ecdcbfffee73505"}
Jan 22 14:00:49 crc kubenswrapper[4769]: I0122 14:00:49.118013 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ce65dba3-22b9-482f-b3da-2f4705468ea4","Type":"ContainerStarted","Data":"a991deb8e0631fde737a00e78149f3287c197258880e7a783e51be05b94e29ed"}
Jan 22 14:00:49 crc kubenswrapper[4769]: I0122 14:00:49.118025 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ce65dba3-22b9-482f-b3da-2f4705468ea4","Type":"ContainerStarted","Data":"080f07f1f2a1ecfffedaf2446036b625e39e4b70c7f389faf7370852330f240e"}
Jan 22 14:00:50 crc kubenswrapper[4769]: I0122 14:00:50.261106 4769 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-ljbrk" podUID="db7ce269-d7ec-4db1-aab3-b22da5d56c6e" containerName="ovn-controller" probeResult="failure" output=<
Jan 22 14:00:50 crc kubenswrapper[4769]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Jan 22 14:00:50 crc kubenswrapper[4769]: >
Jan 22 14:00:50 crc kubenswrapper[4769]: I0122 14:00:50.275885 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-57w6l"
Jan 22 14:00:50 crc kubenswrapper[4769]: I0122 14:00:50.466664 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ljbrk-config-7j6lk"]
Jan 22 14:00:50 crc kubenswrapper[4769]: E0122 14:00:50.467054 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f13b9a7b-6f5e-48fd-8d95-3beb851e9819" containerName="swift-ring-rebalance"
Jan 22 14:00:50 crc kubenswrapper[4769]: I0122 14:00:50.467074 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="f13b9a7b-6f5e-48fd-8d95-3beb851e9819" containerName="swift-ring-rebalance"
Jan 22 14:00:50 crc kubenswrapper[4769]: E0122 14:00:50.467089 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4195c73b-d10a-4b39-ad10-1da9502af686" containerName="mariadb-account-create-update"
Jan 22 14:00:50 crc kubenswrapper[4769]: I0122 14:00:50.467097 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="4195c73b-d10a-4b39-ad10-1da9502af686" containerName="mariadb-account-create-update"
Jan 22 14:00:50 crc kubenswrapper[4769]: I0122 14:00:50.467242 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="f13b9a7b-6f5e-48fd-8d95-3beb851e9819" containerName="swift-ring-rebalance"
Jan 22 14:00:50 crc kubenswrapper[4769]: I0122 14:00:50.467262 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="4195c73b-d10a-4b39-ad10-1da9502af686" containerName="mariadb-account-create-update"
Jan 22 14:00:50 crc kubenswrapper[4769]: I0122 14:00:50.474435 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ljbrk-config-7j6lk"
Jan 22 14:00:50 crc kubenswrapper[4769]: I0122 14:00:50.478140 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts"
Jan 22 14:00:50 crc kubenswrapper[4769]: I0122 14:00:50.482218 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ljbrk-config-7j6lk"]
Jan 22 14:00:50 crc kubenswrapper[4769]: I0122 14:00:50.609832 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/21361871-15c6-44f4-ac22-d7765d9633a0-var-run-ovn\") pod \"ovn-controller-ljbrk-config-7j6lk\" (UID: \"21361871-15c6-44f4-ac22-d7765d9633a0\") " pod="openstack/ovn-controller-ljbrk-config-7j6lk"
Jan 22 14:00:50 crc kubenswrapper[4769]: I0122 14:00:50.610244 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/21361871-15c6-44f4-ac22-d7765d9633a0-var-run\") pod \"ovn-controller-ljbrk-config-7j6lk\" (UID: \"21361871-15c6-44f4-ac22-d7765d9633a0\") " pod="openstack/ovn-controller-ljbrk-config-7j6lk"
Jan 22 14:00:50 crc kubenswrapper[4769]: I0122 14:00:50.610397 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/21361871-15c6-44f4-ac22-d7765d9633a0-var-log-ovn\") pod \"ovn-controller-ljbrk-config-7j6lk\" (UID: \"21361871-15c6-44f4-ac22-d7765d9633a0\") " pod="openstack/ovn-controller-ljbrk-config-7j6lk"
Jan 22 14:00:50 crc kubenswrapper[4769]: I0122 14:00:50.610501 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/21361871-15c6-44f4-ac22-d7765d9633a0-additional-scripts\") pod \"ovn-controller-ljbrk-config-7j6lk\" (UID: \"21361871-15c6-44f4-ac22-d7765d9633a0\") " pod="openstack/ovn-controller-ljbrk-config-7j6lk"
Jan 22 14:00:50 crc kubenswrapper[4769]: I0122 14:00:50.610570 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/21361871-15c6-44f4-ac22-d7765d9633a0-scripts\") pod \"ovn-controller-ljbrk-config-7j6lk\" (UID: \"21361871-15c6-44f4-ac22-d7765d9633a0\") " pod="openstack/ovn-controller-ljbrk-config-7j6lk"
Jan 22 14:00:50 crc kubenswrapper[4769]: I0122 14:00:50.610603 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66zqz\" (UniqueName: \"kubernetes.io/projected/21361871-15c6-44f4-ac22-d7765d9633a0-kube-api-access-66zqz\") pod \"ovn-controller-ljbrk-config-7j6lk\" (UID: \"21361871-15c6-44f4-ac22-d7765d9633a0\") " pod="openstack/ovn-controller-ljbrk-config-7j6lk"
Jan 22 14:00:50 crc kubenswrapper[4769]: I0122 14:00:50.712298 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/21361871-15c6-44f4-ac22-d7765d9633a0-var-run-ovn\") pod \"ovn-controller-ljbrk-config-7j6lk\" (UID: \"21361871-15c6-44f4-ac22-d7765d9633a0\") " pod="openstack/ovn-controller-ljbrk-config-7j6lk"
Jan 22 14:00:50 crc kubenswrapper[4769]: I0122 14:00:50.712407 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/21361871-15c6-44f4-ac22-d7765d9633a0-var-run\") pod \"ovn-controller-ljbrk-config-7j6lk\" (UID: \"21361871-15c6-44f4-ac22-d7765d9633a0\") " pod="openstack/ovn-controller-ljbrk-config-7j6lk"
Jan 22 14:00:50 crc kubenswrapper[4769]: I0122 14:00:50.712462 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/21361871-15c6-44f4-ac22-d7765d9633a0-var-log-ovn\") pod \"ovn-controller-ljbrk-config-7j6lk\" (UID: \"21361871-15c6-44f4-ac22-d7765d9633a0\") " pod="openstack/ovn-controller-ljbrk-config-7j6lk"
Jan 22 14:00:50 crc kubenswrapper[4769]: I0122 14:00:50.712500 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/21361871-15c6-44f4-ac22-d7765d9633a0-additional-scripts\") pod \"ovn-controller-ljbrk-config-7j6lk\" (UID: \"21361871-15c6-44f4-ac22-d7765d9633a0\") " pod="openstack/ovn-controller-ljbrk-config-7j6lk"
Jan 22 14:00:50 crc kubenswrapper[4769]: I0122 14:00:50.712530 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/21361871-15c6-44f4-ac22-d7765d9633a0-scripts\") pod \"ovn-controller-ljbrk-config-7j6lk\" (UID: \"21361871-15c6-44f4-ac22-d7765d9633a0\") " pod="openstack/ovn-controller-ljbrk-config-7j6lk"
Jan 22 14:00:50 crc kubenswrapper[4769]: I0122 14:00:50.712550 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/21361871-15c6-44f4-ac22-d7765d9633a0-var-run-ovn\") pod \"ovn-controller-ljbrk-config-7j6lk\" (UID: \"21361871-15c6-44f4-ac22-d7765d9633a0\") " pod="openstack/ovn-controller-ljbrk-config-7j6lk"
Jan 22 14:00:50 crc kubenswrapper[4769]: I0122 14:00:50.712556 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66zqz\" (UniqueName: \"kubernetes.io/projected/21361871-15c6-44f4-ac22-d7765d9633a0-kube-api-access-66zqz\") pod \"ovn-controller-ljbrk-config-7j6lk\" (UID: \"21361871-15c6-44f4-ac22-d7765d9633a0\") " pod="openstack/ovn-controller-ljbrk-config-7j6lk"
Jan 22 14:00:50 crc kubenswrapper[4769]: I0122 14:00:50.712593 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/21361871-15c6-44f4-ac22-d7765d9633a0-var-run\") pod \"ovn-controller-ljbrk-config-7j6lk\" (UID: \"21361871-15c6-44f4-ac22-d7765d9633a0\") " pod="openstack/ovn-controller-ljbrk-config-7j6lk"
Jan 22 14:00:50 crc kubenswrapper[4769]: I0122 14:00:50.712632 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/21361871-15c6-44f4-ac22-d7765d9633a0-var-log-ovn\") pod \"ovn-controller-ljbrk-config-7j6lk\" (UID: \"21361871-15c6-44f4-ac22-d7765d9633a0\") " pod="openstack/ovn-controller-ljbrk-config-7j6lk"
Jan 22 14:00:50 crc kubenswrapper[4769]: I0122 14:00:50.713409 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/21361871-15c6-44f4-ac22-d7765d9633a0-additional-scripts\") pod \"ovn-controller-ljbrk-config-7j6lk\" (UID: \"21361871-15c6-44f4-ac22-d7765d9633a0\") " pod="openstack/ovn-controller-ljbrk-config-7j6lk"
Jan 22 14:00:50 crc kubenswrapper[4769]: I0122 14:00:50.714895 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/21361871-15c6-44f4-ac22-d7765d9633a0-scripts\") pod \"ovn-controller-ljbrk-config-7j6lk\" (UID: \"21361871-15c6-44f4-ac22-d7765d9633a0\") " pod="openstack/ovn-controller-ljbrk-config-7j6lk"
Jan 22 14:00:50 crc kubenswrapper[4769]: I0122 14:00:50.743915 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66zqz\" (UniqueName: \"kubernetes.io/projected/21361871-15c6-44f4-ac22-d7765d9633a0-kube-api-access-66zqz\") pod \"ovn-controller-ljbrk-config-7j6lk\" (UID: \"21361871-15c6-44f4-ac22-d7765d9633a0\") " pod="openstack/ovn-controller-ljbrk-config-7j6lk"
Jan 22 14:00:50 crc kubenswrapper[4769]: I0122 14:00:50.791558 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ljbrk-config-7j6lk"
Jan 22 14:00:51 crc kubenswrapper[4769]: I0122 14:00:51.377696 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ljbrk-config-7j6lk"]
Jan 22 14:00:51 crc kubenswrapper[4769]: I0122 14:00:51.683550 4769 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="12de511c-514e-496c-9fbf-6d1e10db81fc" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.98:5671: connect: connection refused"
Jan 22 14:00:52 crc kubenswrapper[4769]: I0122 14:00:52.066952 4769 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="7b5386c6-ecca-4882-b692-80c4f5a194e7" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.99:5671: connect: connection refused"
Jan 22 14:00:52 crc kubenswrapper[4769]: I0122 14:00:52.226057 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ce65dba3-22b9-482f-b3da-2f4705468ea4","Type":"ContainerStarted","Data":"e10c907382d8831a559c4c4d89a46a697c4000033721f39c48f631e0c0364cec"}
Jan 22 14:00:52 crc kubenswrapper[4769]: I0122 14:00:52.233565 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ljbrk-config-7j6lk" event={"ID":"21361871-15c6-44f4-ac22-d7765d9633a0","Type":"ContainerStarted","Data":"b5d78f1ed84da206017ea26712a6fdf2d29db5a0dadb912232656c12c0e54e3b"}
Jan 22 14:00:53 crc kubenswrapper[4769]: I0122 14:00:53.221525 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-trlj5"]
Jan 22 14:00:53 crc kubenswrapper[4769]: I0122 14:00:53.222934 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-trlj5"
Need to start a new one" pod="openstack/root-account-create-update-trlj5" Jan 22 14:00:53 crc kubenswrapper[4769]: I0122 14:00:53.229417 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 22 14:00:53 crc kubenswrapper[4769]: I0122 14:00:53.231185 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-trlj5"] Jan 22 14:00:53 crc kubenswrapper[4769]: I0122 14:00:53.360918 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4521e7ce-1245-4a18-9179-83a2b288e227-operator-scripts\") pod \"root-account-create-update-trlj5\" (UID: \"4521e7ce-1245-4a18-9179-83a2b288e227\") " pod="openstack/root-account-create-update-trlj5" Jan 22 14:00:53 crc kubenswrapper[4769]: I0122 14:00:53.361164 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qhdj\" (UniqueName: \"kubernetes.io/projected/4521e7ce-1245-4a18-9179-83a2b288e227-kube-api-access-8qhdj\") pod \"root-account-create-update-trlj5\" (UID: \"4521e7ce-1245-4a18-9179-83a2b288e227\") " pod="openstack/root-account-create-update-trlj5" Jan 22 14:00:53 crc kubenswrapper[4769]: I0122 14:00:53.462863 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qhdj\" (UniqueName: \"kubernetes.io/projected/4521e7ce-1245-4a18-9179-83a2b288e227-kube-api-access-8qhdj\") pod \"root-account-create-update-trlj5\" (UID: \"4521e7ce-1245-4a18-9179-83a2b288e227\") " pod="openstack/root-account-create-update-trlj5" Jan 22 14:00:53 crc kubenswrapper[4769]: I0122 14:00:53.462984 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4521e7ce-1245-4a18-9179-83a2b288e227-operator-scripts\") pod \"root-account-create-update-trlj5\" (UID: \"4521e7ce-1245-4a18-9179-83a2b288e227\") " pod="openstack/root-account-create-update-trlj5" Jan 22 14:00:53 crc kubenswrapper[4769]: I0122 14:00:53.463759 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4521e7ce-1245-4a18-9179-83a2b288e227-operator-scripts\") pod \"root-account-create-update-trlj5\" (UID: \"4521e7ce-1245-4a18-9179-83a2b288e227\") " pod="openstack/root-account-create-update-trlj5" Jan 22 14:00:53 crc kubenswrapper[4769]: I0122 14:00:53.484207 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qhdj\" (UniqueName: \"kubernetes.io/projected/4521e7ce-1245-4a18-9179-83a2b288e227-kube-api-access-8qhdj\") pod \"root-account-create-update-trlj5\" (UID: \"4521e7ce-1245-4a18-9179-83a2b288e227\") " pod="openstack/root-account-create-update-trlj5" Jan 22 14:00:53 crc kubenswrapper[4769]: I0122 14:00:53.576868 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-trlj5" Jan 22 14:00:54 crc kubenswrapper[4769]: I0122 14:00:54.108906 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-trlj5"] Jan 22 14:00:54 crc kubenswrapper[4769]: W0122 14:00:54.121189 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4521e7ce_1245_4a18_9179_83a2b288e227.slice/crio-3ffc29455d0ce4e9188b4d45fbc23f46eb348eb47e7206213b55b1e9587c3ca1 WatchSource:0}: Error finding container 3ffc29455d0ce4e9188b4d45fbc23f46eb348eb47e7206213b55b1e9587c3ca1: Status 404 returned error can't find the container with id 3ffc29455d0ce4e9188b4d45fbc23f46eb348eb47e7206213b55b1e9587c3ca1 Jan 22 14:00:54 crc kubenswrapper[4769]: I0122 14:00:54.257334 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ce65dba3-22b9-482f-b3da-2f4705468ea4","Type":"ContainerStarted","Data":"9a1f50309aeee1040bdd92b3e5ea00d03944cbae5a44744e87efcb265d3a7b37"} Jan 22 14:00:54 crc kubenswrapper[4769]: I0122 14:00:54.257651 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ce65dba3-22b9-482f-b3da-2f4705468ea4","Type":"ContainerStarted","Data":"3737f1d391e2fecf42f301060c8c5c1da63f3f2e5806e23f7d670983f57e9dec"} Jan 22 14:00:54 crc kubenswrapper[4769]: I0122 14:00:54.258962 4769 generic.go:334] "Generic (PLEG): container finished" podID="21361871-15c6-44f4-ac22-d7765d9633a0" containerID="1df5bb57a2b37a726deb06ee2a4311afcd91a86d912ad8365dad00a8584aad2b" exitCode=0 Jan 22 14:00:54 crc kubenswrapper[4769]: I0122 14:00:54.258998 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ljbrk-config-7j6lk" event={"ID":"21361871-15c6-44f4-ac22-d7765d9633a0","Type":"ContainerDied","Data":"1df5bb57a2b37a726deb06ee2a4311afcd91a86d912ad8365dad00a8584aad2b"} Jan 22 14:00:54 crc kubenswrapper[4769]: I0122 14:00:54.260393 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-trlj5" event={"ID":"4521e7ce-1245-4a18-9179-83a2b288e227","Type":"ContainerStarted","Data":"3ffc29455d0ce4e9188b4d45fbc23f46eb348eb47e7206213b55b1e9587c3ca1"} Jan 22 14:00:55 crc kubenswrapper[4769]: I0122 14:00:55.262565 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ljbrk" Jan 22 14:00:55 crc kubenswrapper[4769]: I0122 14:00:55.270201 4769 generic.go:334] "Generic (PLEG): container finished" podID="4521e7ce-1245-4a18-9179-83a2b288e227" containerID="09178c7f0f25de3bb2d0040621da54e6d9636a7e539ca3291149727833705d8f" exitCode=0 Jan 22 14:00:55 crc kubenswrapper[4769]: I0122 14:00:55.270256 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-trlj5" event={"ID":"4521e7ce-1245-4a18-9179-83a2b288e227","Type":"ContainerDied","Data":"09178c7f0f25de3bb2d0040621da54e6d9636a7e539ca3291149727833705d8f"} Jan 22 14:00:55 crc kubenswrapper[4769]: I0122 14:00:55.276528 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ce65dba3-22b9-482f-b3da-2f4705468ea4","Type":"ContainerStarted","Data":"b88db4863753c30af904596d458016893c1fd2790bc4eea038c5fecef9c97bd9"} Jan 22 14:01:01 crc kubenswrapper[4769]: I0122 14:01:01.684276 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 
14:01:02.006423 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ljbrk-config-7j6lk" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.011580 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-trlj5" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.055398 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-7r9tp"] Jan 22 14:01:02 crc kubenswrapper[4769]: E0122 14:01:02.055866 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4521e7ce-1245-4a18-9179-83a2b288e227" containerName="mariadb-account-create-update" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.055883 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="4521e7ce-1245-4a18-9179-83a2b288e227" containerName="mariadb-account-create-update" Jan 22 14:01:02 crc kubenswrapper[4769]: E0122 14:01:02.055905 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21361871-15c6-44f4-ac22-d7765d9633a0" containerName="ovn-config" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.055911 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="21361871-15c6-44f4-ac22-d7765d9633a0" containerName="ovn-config" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.056076 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="21361871-15c6-44f4-ac22-d7765d9633a0" containerName="ovn-config" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.056107 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="4521e7ce-1245-4a18-9179-83a2b288e227" containerName="mariadb-account-create-update" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.056602 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-7r9tp" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.073391 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-7r9tp"] Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.076916 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.122139 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-8372-account-create-update-lq4fn"] Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.125285 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-8372-account-create-update-lq4fn" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.128565 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.137147 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-8372-account-create-update-lq4fn"] Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.173099 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66zqz\" (UniqueName: \"kubernetes.io/projected/21361871-15c6-44f4-ac22-d7765d9633a0-kube-api-access-66zqz\") pod \"21361871-15c6-44f4-ac22-d7765d9633a0\" (UID: \"21361871-15c6-44f4-ac22-d7765d9633a0\") " Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.173161 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/21361871-15c6-44f4-ac22-d7765d9633a0-scripts\") pod \"21361871-15c6-44f4-ac22-d7765d9633a0\" (UID: \"21361871-15c6-44f4-ac22-d7765d9633a0\") " Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.173214 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4521e7ce-1245-4a18-9179-83a2b288e227-operator-scripts\") pod \"4521e7ce-1245-4a18-9179-83a2b288e227\" (UID: \"4521e7ce-1245-4a18-9179-83a2b288e227\") " Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.173359 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/21361871-15c6-44f4-ac22-d7765d9633a0-var-run\") pod \"21361871-15c6-44f4-ac22-d7765d9633a0\" (UID: \"21361871-15c6-44f4-ac22-d7765d9633a0\") " Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.173408 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/21361871-15c6-44f4-ac22-d7765d9633a0-var-run-ovn\") pod \"21361871-15c6-44f4-ac22-d7765d9633a0\" (UID: \"21361871-15c6-44f4-ac22-d7765d9633a0\") " Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.173433 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/21361871-15c6-44f4-ac22-d7765d9633a0-var-log-ovn\") pod \"21361871-15c6-44f4-ac22-d7765d9633a0\" (UID: \"21361871-15c6-44f4-ac22-d7765d9633a0\") " Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.173472 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8qhdj\" (UniqueName: \"kubernetes.io/projected/4521e7ce-1245-4a18-9179-83a2b288e227-kube-api-access-8qhdj\") pod \"4521e7ce-1245-4a18-9179-83a2b288e227\" (UID: \"4521e7ce-1245-4a18-9179-83a2b288e227\") " Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.173513 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/21361871-15c6-44f4-ac22-d7765d9633a0-additional-scripts\") pod \"21361871-15c6-44f4-ac22-d7765d9633a0\" (UID: \"21361871-15c6-44f4-ac22-d7765d9633a0\") " Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.173843 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21361871-15c6-44f4-ac22-d7765d9633a0-var-run" (OuterVolumeSpecName: "var-run") pod "21361871-15c6-44f4-ac22-d7765d9633a0" (UID: 
"21361871-15c6-44f4-ac22-d7765d9633a0"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.173939 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldw6j\" (UniqueName: \"kubernetes.io/projected/ae6aa3a5-37b1-42b7-9bac-b861b2d47bb0-kube-api-access-ldw6j\") pod \"cinder-db-create-7r9tp\" (UID: \"ae6aa3a5-37b1-42b7-9bac-b861b2d47bb0\") " pod="openstack/cinder-db-create-7r9tp" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.174092 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae6aa3a5-37b1-42b7-9bac-b861b2d47bb0-operator-scripts\") pod \"cinder-db-create-7r9tp\" (UID: \"ae6aa3a5-37b1-42b7-9bac-b861b2d47bb0\") " pod="openstack/cinder-db-create-7r9tp" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.174228 4769 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/21361871-15c6-44f4-ac22-d7765d9633a0-var-run\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.174468 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21361871-15c6-44f4-ac22-d7765d9633a0-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "21361871-15c6-44f4-ac22-d7765d9633a0" (UID: "21361871-15c6-44f4-ac22-d7765d9633a0"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.174494 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21361871-15c6-44f4-ac22-d7765d9633a0-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "21361871-15c6-44f4-ac22-d7765d9633a0" (UID: "21361871-15c6-44f4-ac22-d7765d9633a0"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.174699 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4521e7ce-1245-4a18-9179-83a2b288e227-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4521e7ce-1245-4a18-9179-83a2b288e227" (UID: "4521e7ce-1245-4a18-9179-83a2b288e227"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.174746 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21361871-15c6-44f4-ac22-d7765d9633a0-scripts" (OuterVolumeSpecName: "scripts") pod "21361871-15c6-44f4-ac22-d7765d9633a0" (UID: "21361871-15c6-44f4-ac22-d7765d9633a0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.175938 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21361871-15c6-44f4-ac22-d7765d9633a0-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "21361871-15c6-44f4-ac22-d7765d9633a0" (UID: "21361871-15c6-44f4-ac22-d7765d9633a0"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.185024 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21361871-15c6-44f4-ac22-d7765d9633a0-kube-api-access-66zqz" (OuterVolumeSpecName: "kube-api-access-66zqz") pod "21361871-15c6-44f4-ac22-d7765d9633a0" (UID: "21361871-15c6-44f4-ac22-d7765d9633a0"). InnerVolumeSpecName "kube-api-access-66zqz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.202247 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4521e7ce-1245-4a18-9179-83a2b288e227-kube-api-access-8qhdj" (OuterVolumeSpecName: "kube-api-access-8qhdj") pod "4521e7ce-1245-4a18-9179-83a2b288e227" (UID: "4521e7ce-1245-4a18-9179-83a2b288e227"). InnerVolumeSpecName "kube-api-access-8qhdj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.215244 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-5nx2t"] Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.217025 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-5nx2t" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.226081 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-8bb3-account-create-update-x6jhs"] Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.228614 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8bb3-account-create-update-x6jhs" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.231169 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.241831 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-5nx2t"] Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.253614 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-8bb3-account-create-update-x6jhs"] Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.277631 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae6aa3a5-37b1-42b7-9bac-b861b2d47bb0-operator-scripts\") pod \"cinder-db-create-7r9tp\" (UID: \"ae6aa3a5-37b1-42b7-9bac-b861b2d47bb0\") " pod="openstack/cinder-db-create-7r9tp" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.278063 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cl5gx\" (UniqueName: \"kubernetes.io/projected/51e2f7fd-cd2e-4a84-b62a-27915d32469c-kube-api-access-cl5gx\") pod \"cinder-8372-account-create-update-lq4fn\" (UID: \"51e2f7fd-cd2e-4a84-b62a-27915d32469c\") " pod="openstack/cinder-8372-account-create-update-lq4fn" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.278104 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51e2f7fd-cd2e-4a84-b62a-27915d32469c-operator-scripts\") pod \"cinder-8372-account-create-update-lq4fn\" (UID: \"51e2f7fd-cd2e-4a84-b62a-27915d32469c\") " pod="openstack/cinder-8372-account-create-update-lq4fn" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.278139 4769 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d72603e-a10a-4490-8298-67db64d087fc-operator-scripts\") pod \"barbican-db-create-5nx2t\" (UID: \"3d72603e-a10a-4490-8298-67db64d087fc\") " pod="openstack/barbican-db-create-5nx2t" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.278242 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldw6j\" (UniqueName: \"kubernetes.io/projected/ae6aa3a5-37b1-42b7-9bac-b861b2d47bb0-kube-api-access-ldw6j\") pod \"cinder-db-create-7r9tp\" (UID: \"ae6aa3a5-37b1-42b7-9bac-b861b2d47bb0\") " pod="openstack/cinder-db-create-7r9tp" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.278276 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdv24\" (UniqueName: \"kubernetes.io/projected/3d72603e-a10a-4490-8298-67db64d087fc-kube-api-access-bdv24\") pod \"barbican-db-create-5nx2t\" (UID: \"3d72603e-a10a-4490-8298-67db64d087fc\") " pod="openstack/barbican-db-create-5nx2t" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.278315 4769 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/21361871-15c6-44f4-ac22-d7765d9633a0-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.278327 4769 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/21361871-15c6-44f4-ac22-d7765d9633a0-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.278336 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8qhdj\" (UniqueName: \"kubernetes.io/projected/4521e7ce-1245-4a18-9179-83a2b288e227-kube-api-access-8qhdj\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.278345 4769 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/21361871-15c6-44f4-ac22-d7765d9633a0-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.278354 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66zqz\" (UniqueName: \"kubernetes.io/projected/21361871-15c6-44f4-ac22-d7765d9633a0-kube-api-access-66zqz\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.278363 4769 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/21361871-15c6-44f4-ac22-d7765d9633a0-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.278372 4769 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4521e7ce-1245-4a18-9179-83a2b288e227-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.279458 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae6aa3a5-37b1-42b7-9bac-b861b2d47bb0-operator-scripts\") pod \"cinder-db-create-7r9tp\" (UID: \"ae6aa3a5-37b1-42b7-9bac-b861b2d47bb0\") " pod="openstack/cinder-db-create-7r9tp" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.311267 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldw6j\" (UniqueName: 
\"kubernetes.io/projected/ae6aa3a5-37b1-42b7-9bac-b861b2d47bb0-kube-api-access-ldw6j\") pod \"cinder-db-create-7r9tp\" (UID: \"ae6aa3a5-37b1-42b7-9bac-b861b2d47bb0\") " pod="openstack/cinder-db-create-7r9tp" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.342118 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ljbrk-config-7j6lk" event={"ID":"21361871-15c6-44f4-ac22-d7765d9633a0","Type":"ContainerDied","Data":"b5d78f1ed84da206017ea26712a6fdf2d29db5a0dadb912232656c12c0e54e3b"} Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.342155 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5d78f1ed84da206017ea26712a6fdf2d29db5a0dadb912232656c12c0e54e3b" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.342208 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ljbrk-config-7j6lk" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.351222 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-trlj5" event={"ID":"4521e7ce-1245-4a18-9179-83a2b288e227","Type":"ContainerDied","Data":"3ffc29455d0ce4e9188b4d45fbc23f46eb348eb47e7206213b55b1e9587c3ca1"} Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.351262 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ffc29455d0ce4e9188b4d45fbc23f46eb348eb47e7206213b55b1e9587c3ca1" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.351863 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-trlj5" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.374095 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-7r9tp" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.380004 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cl5gx\" (UniqueName: \"kubernetes.io/projected/51e2f7fd-cd2e-4a84-b62a-27915d32469c-kube-api-access-cl5gx\") pod \"cinder-8372-account-create-update-lq4fn\" (UID: \"51e2f7fd-cd2e-4a84-b62a-27915d32469c\") " pod="openstack/cinder-8372-account-create-update-lq4fn" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.380049 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51e2f7fd-cd2e-4a84-b62a-27915d32469c-operator-scripts\") pod \"cinder-8372-account-create-update-lq4fn\" (UID: \"51e2f7fd-cd2e-4a84-b62a-27915d32469c\") " pod="openstack/cinder-8372-account-create-update-lq4fn" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.380078 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d72603e-a10a-4490-8298-67db64d087fc-operator-scripts\") pod \"barbican-db-create-5nx2t\" (UID: \"3d72603e-a10a-4490-8298-67db64d087fc\") " pod="openstack/barbican-db-create-5nx2t" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.380097 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec90402f-c994-4710-b82f-5c8cc3f12fdf-operator-scripts\") pod \"barbican-8bb3-account-create-update-x6jhs\" (UID: \"ec90402f-c994-4710-b82f-5c8cc3f12fdf\") " pod="openstack/barbican-8bb3-account-create-update-x6jhs" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.380178 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdv24\" (UniqueName: \"kubernetes.io/projected/3d72603e-a10a-4490-8298-67db64d087fc-kube-api-access-bdv24\") pod \"barbican-db-create-5nx2t\" (UID: \"3d72603e-a10a-4490-8298-67db64d087fc\") " pod="openstack/barbican-db-create-5nx2t" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.380213 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hs9rg\" (UniqueName: \"kubernetes.io/projected/ec90402f-c994-4710-b82f-5c8cc3f12fdf-kube-api-access-hs9rg\") pod \"barbican-8bb3-account-create-update-x6jhs\" (UID: \"ec90402f-c994-4710-b82f-5c8cc3f12fdf\") " pod="openstack/barbican-8bb3-account-create-update-x6jhs" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.381185 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51e2f7fd-cd2e-4a84-b62a-27915d32469c-operator-scripts\") pod \"cinder-8372-account-create-update-lq4fn\" (UID: \"51e2f7fd-cd2e-4a84-b62a-27915d32469c\") " pod="openstack/cinder-8372-account-create-update-lq4fn" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.381762 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d72603e-a10a-4490-8298-67db64d087fc-operator-scripts\") pod \"barbican-db-create-5nx2t\" (UID: \"3d72603e-a10a-4490-8298-67db64d087fc\") " pod="openstack/barbican-db-create-5nx2t" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.402511 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cl5gx\" (UniqueName: 
\"kubernetes.io/projected/51e2f7fd-cd2e-4a84-b62a-27915d32469c-kube-api-access-cl5gx\") pod \"cinder-8372-account-create-update-lq4fn\" (UID: \"51e2f7fd-cd2e-4a84-b62a-27915d32469c\") " pod="openstack/cinder-8372-account-create-update-lq4fn" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.403116 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdv24\" (UniqueName: \"kubernetes.io/projected/3d72603e-a10a-4490-8298-67db64d087fc-kube-api-access-bdv24\") pod \"barbican-db-create-5nx2t\" (UID: \"3d72603e-a10a-4490-8298-67db64d087fc\") " pod="openstack/barbican-db-create-5nx2t" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.430072 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-r7c9w"] Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.431123 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-r7c9w" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.435692 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.435931 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-nrw5d" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.436053 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.436416 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.439260 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-r7c9w"] Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.444752 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-8372-account-create-update-lq4fn" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.481126 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ld4t\" (UniqueName: \"kubernetes.io/projected/275c0c66-cbd1-4469-81f6-c33a1eab0ed6-kube-api-access-6ld4t\") pod \"keystone-db-sync-r7c9w\" (UID: \"275c0c66-cbd1-4469-81f6-c33a1eab0ed6\") " pod="openstack/keystone-db-sync-r7c9w" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.481194 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec90402f-c994-4710-b82f-5c8cc3f12fdf-operator-scripts\") pod \"barbican-8bb3-account-create-update-x6jhs\" (UID: \"ec90402f-c994-4710-b82f-5c8cc3f12fdf\") " pod="openstack/barbican-8bb3-account-create-update-x6jhs" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.481225 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/275c0c66-cbd1-4469-81f6-c33a1eab0ed6-combined-ca-bundle\") pod \"keystone-db-sync-r7c9w\" (UID: \"275c0c66-cbd1-4469-81f6-c33a1eab0ed6\") " pod="openstack/keystone-db-sync-r7c9w" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.481278 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/275c0c66-cbd1-4469-81f6-c33a1eab0ed6-config-data\") pod \"keystone-db-sync-r7c9w\" (UID: \"275c0c66-cbd1-4469-81f6-c33a1eab0ed6\") " pod="openstack/keystone-db-sync-r7c9w" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.481326 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hs9rg\" (UniqueName: \"kubernetes.io/projected/ec90402f-c994-4710-b82f-5c8cc3f12fdf-kube-api-access-hs9rg\") pod \"barbican-8bb3-account-create-update-x6jhs\" (UID: \"ec90402f-c994-4710-b82f-5c8cc3f12fdf\") " pod="openstack/barbican-8bb3-account-create-update-x6jhs" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.482201 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec90402f-c994-4710-b82f-5c8cc3f12fdf-operator-scripts\") pod \"barbican-8bb3-account-create-update-x6jhs\" (UID: \"ec90402f-c994-4710-b82f-5c8cc3f12fdf\") " pod="openstack/barbican-8bb3-account-create-update-x6jhs" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.506875 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-24cb-account-create-update-rtdf4"] Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.507653 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hs9rg\" (UniqueName: \"kubernetes.io/projected/ec90402f-c994-4710-b82f-5c8cc3f12fdf-kube-api-access-hs9rg\") pod \"barbican-8bb3-account-create-update-x6jhs\" (UID: \"ec90402f-c994-4710-b82f-5c8cc3f12fdf\") " pod="openstack/barbican-8bb3-account-create-update-x6jhs" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.508005 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-24cb-account-create-update-rtdf4" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.510140 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.521778 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-24cb-account-create-update-rtdf4"] Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.585445 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/275c0c66-cbd1-4469-81f6-c33a1eab0ed6-combined-ca-bundle\") pod \"keystone-db-sync-r7c9w\" (UID: \"275c0c66-cbd1-4469-81f6-c33a1eab0ed6\") " pod="openstack/keystone-db-sync-r7c9w" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.585554 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/275c0c66-cbd1-4469-81f6-c33a1eab0ed6-config-data\") pod \"keystone-db-sync-r7c9w\" (UID: \"275c0c66-cbd1-4469-81f6-c33a1eab0ed6\") " pod="openstack/keystone-db-sync-r7c9w" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.585595 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2wnb\" (UniqueName: \"kubernetes.io/projected/cb68cb3e-c079-4e87-ae9b-be93a2b8b80e-kube-api-access-f2wnb\") pod \"neutron-24cb-account-create-update-rtdf4\" (UID: \"cb68cb3e-c079-4e87-ae9b-be93a2b8b80e\") " pod="openstack/neutron-24cb-account-create-update-rtdf4" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.585651 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb68cb3e-c079-4e87-ae9b-be93a2b8b80e-operator-scripts\") pod \"neutron-24cb-account-create-update-rtdf4\" (UID: \"cb68cb3e-c079-4e87-ae9b-be93a2b8b80e\") " pod="openstack/neutron-24cb-account-create-update-rtdf4" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.585718 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ld4t\" (UniqueName: \"kubernetes.io/projected/275c0c66-cbd1-4469-81f6-c33a1eab0ed6-kube-api-access-6ld4t\") pod \"keystone-db-sync-r7c9w\" (UID: \"275c0c66-cbd1-4469-81f6-c33a1eab0ed6\") " pod="openstack/keystone-db-sync-r7c9w" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.591087 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/275c0c66-cbd1-4469-81f6-c33a1eab0ed6-config-data\") pod \"keystone-db-sync-r7c9w\" (UID: \"275c0c66-cbd1-4469-81f6-c33a1eab0ed6\") " pod="openstack/keystone-db-sync-r7c9w" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.591653 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/275c0c66-cbd1-4469-81f6-c33a1eab0ed6-combined-ca-bundle\") pod \"keystone-db-sync-r7c9w\" (UID: \"275c0c66-cbd1-4469-81f6-c33a1eab0ed6\") " pod="openstack/keystone-db-sync-r7c9w" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.605187 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ld4t\" (UniqueName: \"kubernetes.io/projected/275c0c66-cbd1-4469-81f6-c33a1eab0ed6-kube-api-access-6ld4t\") pod \"keystone-db-sync-r7c9w\" (UID: \"275c0c66-cbd1-4469-81f6-c33a1eab0ed6\") " 
pod="openstack/keystone-db-sync-r7c9w" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.617097 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-5nx2t" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.624870 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-892lk"] Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.626309 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-892lk" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.632628 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-892lk"] Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.637935 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8bb3-account-create-update-x6jhs" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.686800 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb68cb3e-c079-4e87-ae9b-be93a2b8b80e-operator-scripts\") pod \"neutron-24cb-account-create-update-rtdf4\" (UID: \"cb68cb3e-c079-4e87-ae9b-be93a2b8b80e\") " pod="openstack/neutron-24cb-account-create-update-rtdf4" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.686888 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad0702a4-ee8a-45da-9cb7-40c2e4b257b9-operator-scripts\") pod \"neutron-db-create-892lk\" (UID: \"ad0702a4-ee8a-45da-9cb7-40c2e4b257b9\") " pod="openstack/neutron-db-create-892lk" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.686972 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45ccg\" (UniqueName: \"kubernetes.io/projected/ad0702a4-ee8a-45da-9cb7-40c2e4b257b9-kube-api-access-45ccg\") pod \"neutron-db-create-892lk\" (UID: \"ad0702a4-ee8a-45da-9cb7-40c2e4b257b9\") " pod="openstack/neutron-db-create-892lk" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.687037 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2wnb\" (UniqueName: \"kubernetes.io/projected/cb68cb3e-c079-4e87-ae9b-be93a2b8b80e-kube-api-access-f2wnb\") pod \"neutron-24cb-account-create-update-rtdf4\" (UID: \"cb68cb3e-c079-4e87-ae9b-be93a2b8b80e\") " pod="openstack/neutron-24cb-account-create-update-rtdf4" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.688093 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb68cb3e-c079-4e87-ae9b-be93a2b8b80e-operator-scripts\") pod \"neutron-24cb-account-create-update-rtdf4\" (UID: \"cb68cb3e-c079-4e87-ae9b-be93a2b8b80e\") " pod="openstack/neutron-24cb-account-create-update-rtdf4" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.711189 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2wnb\" (UniqueName: \"kubernetes.io/projected/cb68cb3e-c079-4e87-ae9b-be93a2b8b80e-kube-api-access-f2wnb\") pod \"neutron-24cb-account-create-update-rtdf4\" (UID: \"cb68cb3e-c079-4e87-ae9b-be93a2b8b80e\") " pod="openstack/neutron-24cb-account-create-update-rtdf4" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.757594 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-r7c9w" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.789027 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad0702a4-ee8a-45da-9cb7-40c2e4b257b9-operator-scripts\") pod \"neutron-db-create-892lk\" (UID: \"ad0702a4-ee8a-45da-9cb7-40c2e4b257b9\") " pod="openstack/neutron-db-create-892lk" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.789158 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45ccg\" (UniqueName: \"kubernetes.io/projected/ad0702a4-ee8a-45da-9cb7-40c2e4b257b9-kube-api-access-45ccg\") pod \"neutron-db-create-892lk\" (UID: \"ad0702a4-ee8a-45da-9cb7-40c2e4b257b9\") " pod="openstack/neutron-db-create-892lk" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.790250 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad0702a4-ee8a-45da-9cb7-40c2e4b257b9-operator-scripts\") pod \"neutron-db-create-892lk\" (UID: \"ad0702a4-ee8a-45da-9cb7-40c2e4b257b9\") " pod="openstack/neutron-db-create-892lk" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.807741 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45ccg\" (UniqueName: \"kubernetes.io/projected/ad0702a4-ee8a-45da-9cb7-40c2e4b257b9-kube-api-access-45ccg\") pod \"neutron-db-create-892lk\" (UID: \"ad0702a4-ee8a-45da-9cb7-40c2e4b257b9\") " pod="openstack/neutron-db-create-892lk" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.832351 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-24cb-account-create-update-rtdf4" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.949418 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-7r9tp"] Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.969089 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-892lk" Jan 22 14:01:03 crc kubenswrapper[4769]: I0122 14:01:03.127632 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-8372-account-create-update-lq4fn"] Jan 22 14:01:03 crc kubenswrapper[4769]: I0122 14:01:03.169534 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ljbrk-config-7j6lk"] Jan 22 14:01:03 crc kubenswrapper[4769]: I0122 14:01:03.195353 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ljbrk-config-7j6lk"] Jan 22 14:01:03 crc kubenswrapper[4769]: I0122 14:01:03.222745 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-8bb3-account-create-update-x6jhs"] Jan 22 14:01:03 crc kubenswrapper[4769]: I0122 14:01:03.266758 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-5nx2t"] Jan 22 14:01:03 crc kubenswrapper[4769]: I0122 14:01:03.276640 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-24cb-account-create-update-rtdf4"] Jan 22 14:01:03 crc kubenswrapper[4769]: W0122 14:01:03.311594 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb68cb3e_c079_4e87_ae9b_be93a2b8b80e.slice/crio-7d34d84588f950f10863a3d8b771247ec7e6196fa9aab76b092308a4474630c8 WatchSource:0}: Error finding container 7d34d84588f950f10863a3d8b771247ec7e6196fa9aab76b092308a4474630c8: Status 404 returned error can't find the container with id 7d34d84588f950f10863a3d8b771247ec7e6196fa9aab76b092308a4474630c8 Jan 22 14:01:03 crc kubenswrapper[4769]: I0122 14:01:03.373617 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-r7c9w"] Jan 22 14:01:03 crc kubenswrapper[4769]: I0122 14:01:03.399126 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-8372-account-create-update-lq4fn" event={"ID":"51e2f7fd-cd2e-4a84-b62a-27915d32469c","Type":"ContainerStarted","Data":"d679f95f173487e55b7459bd3fc7f4540a679004c865d7f6767595d3d679ed77"} Jan 22 14:01:03 crc kubenswrapper[4769]: I0122 14:01:03.405199 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-5nx2t" event={"ID":"3d72603e-a10a-4490-8298-67db64d087fc","Type":"ContainerStarted","Data":"1398047490e7ad774844fcdd21d36eeaa7ef1a8b0e137e6b1405961ab26a58b1"} Jan 22 14:01:03 crc kubenswrapper[4769]: I0122 14:01:03.411764 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-24cb-account-create-update-rtdf4" event={"ID":"cb68cb3e-c079-4e87-ae9b-be93a2b8b80e","Type":"ContainerStarted","Data":"7d34d84588f950f10863a3d8b771247ec7e6196fa9aab76b092308a4474630c8"} Jan 22 14:01:03 crc kubenswrapper[4769]: I0122 14:01:03.424425 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-7r9tp" event={"ID":"ae6aa3a5-37b1-42b7-9bac-b861b2d47bb0","Type":"ContainerStarted","Data":"53daaafba2df179129fbaee7564a7dbb0810bedc12982841f6922ce3e0b0c0bc"} Jan 22 14:01:03 crc kubenswrapper[4769]: I0122 14:01:03.428533 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8bb3-account-create-update-x6jhs" event={"ID":"ec90402f-c994-4710-b82f-5c8cc3f12fdf","Type":"ContainerStarted","Data":"2526f6d6abe9ddf1def4e75e6755fa98fa5b8f9ceae123095b211a7facde003a"} Jan 22 14:01:03 crc kubenswrapper[4769]: I0122 14:01:03.496118 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-892lk"] Jan 
22 14:01:04 crc kubenswrapper[4769]: I0122 14:01:04.438741 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-5nx2t" event={"ID":"3d72603e-a10a-4490-8298-67db64d087fc","Type":"ContainerStarted","Data":"52648bb4b661a8c6c50f29dcbb2e628521c76a98f4664eeeaa26623f333c78ee"} Jan 22 14:01:04 crc kubenswrapper[4769]: I0122 14:01:04.441245 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-r7c9w" event={"ID":"275c0c66-cbd1-4469-81f6-c33a1eab0ed6","Type":"ContainerStarted","Data":"92baa55a546dc1edc3b0176ea083063e122cca726bb4af4e4e8f8b15d0ee43c7"} Jan 22 14:01:04 crc kubenswrapper[4769]: I0122 14:01:04.442907 4769 generic.go:334] "Generic (PLEG): container finished" podID="ae6aa3a5-37b1-42b7-9bac-b861b2d47bb0" containerID="a23fe7e1f609804bd01eaf3b67aa868ecc07d3bf005fc4cf04bf270bb0eb13a4" exitCode=0 Jan 22 14:01:04 crc kubenswrapper[4769]: I0122 14:01:04.442998 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-7r9tp" event={"ID":"ae6aa3a5-37b1-42b7-9bac-b861b2d47bb0","Type":"ContainerDied","Data":"a23fe7e1f609804bd01eaf3b67aa868ecc07d3bf005fc4cf04bf270bb0eb13a4"} Jan 22 14:01:04 crc kubenswrapper[4769]: I0122 14:01:04.446919 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8bb3-account-create-update-x6jhs" event={"ID":"ec90402f-c994-4710-b82f-5c8cc3f12fdf","Type":"ContainerStarted","Data":"afe20a822b4f3e3d56773006d4aeb9478417b77dbf27f9940cbd13b2576b2dc2"} Jan 22 14:01:04 crc kubenswrapper[4769]: I0122 14:01:04.450561 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-t9sxw" event={"ID":"b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299","Type":"ContainerStarted","Data":"61d9e5ec964872c1028545493f0b6a3c6f57bd0bc24e83e376180164d65cbfb4"} Jan 22 14:01:04 crc kubenswrapper[4769]: I0122 14:01:04.456969 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-8372-account-create-update-lq4fn" event={"ID":"51e2f7fd-cd2e-4a84-b62a-27915d32469c","Type":"ContainerStarted","Data":"21355f679d3807ef130aaa327e0801fb4ef81abe61c9581a47edf5ff6be44534"} Jan 22 14:01:04 crc kubenswrapper[4769]: I0122 14:01:04.464175 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-5nx2t" podStartSLOduration=2.464156329 podStartE2EDuration="2.464156329s" podCreationTimestamp="2026-01-22 14:01:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:01:04.457653293 +0000 UTC m=+1043.868763212" watchObservedRunningTime="2026-01-22 14:01:04.464156329 +0000 UTC m=+1043.875266258" Jan 22 14:01:04 crc kubenswrapper[4769]: I0122 14:01:04.479666 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-t9sxw" podStartSLOduration=3.879451478 podStartE2EDuration="19.479630158s" podCreationTimestamp="2026-01-22 14:00:45 +0000 UTC" firstStartedPulling="2026-01-22 14:00:46.612382847 +0000 UTC m=+1026.023492786" lastFinishedPulling="2026-01-22 14:01:02.212561537 +0000 UTC m=+1041.623671466" observedRunningTime="2026-01-22 14:01:04.476039551 +0000 UTC m=+1043.887149490" watchObservedRunningTime="2026-01-22 14:01:04.479630158 +0000 UTC m=+1043.890740087" Jan 22 14:01:04 crc kubenswrapper[4769]: I0122 14:01:04.594228 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-8bb3-account-create-update-x6jhs" podStartSLOduration=2.594208715 
podStartE2EDuration="2.594208715s" podCreationTimestamp="2026-01-22 14:01:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:01:04.566082052 +0000 UTC m=+1043.977192001" watchObservedRunningTime="2026-01-22 14:01:04.594208715 +0000 UTC m=+1044.005318644" Jan 22 14:01:04 crc kubenswrapper[4769]: I0122 14:01:04.595051 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-8372-account-create-update-lq4fn" podStartSLOduration=2.595043568 podStartE2EDuration="2.595043568s" podCreationTimestamp="2026-01-22 14:01:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:01:04.594983637 +0000 UTC m=+1044.006093566" watchObservedRunningTime="2026-01-22 14:01:04.595043568 +0000 UTC m=+1044.006153497" Jan 22 14:01:04 crc kubenswrapper[4769]: I0122 14:01:04.939869 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21361871-15c6-44f4-ac22-d7765d9633a0" path="/var/lib/kubelet/pods/21361871-15c6-44f4-ac22-d7765d9633a0/volumes" Jan 22 14:01:05 crc kubenswrapper[4769]: I0122 14:01:05.481290 4769 generic.go:334] "Generic (PLEG): container finished" podID="ec90402f-c994-4710-b82f-5c8cc3f12fdf" containerID="afe20a822b4f3e3d56773006d4aeb9478417b77dbf27f9940cbd13b2576b2dc2" exitCode=0 Jan 22 14:01:05 crc kubenswrapper[4769]: I0122 14:01:05.482745 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8bb3-account-create-update-x6jhs" event={"ID":"ec90402f-c994-4710-b82f-5c8cc3f12fdf","Type":"ContainerDied","Data":"afe20a822b4f3e3d56773006d4aeb9478417b77dbf27f9940cbd13b2576b2dc2"} Jan 22 14:01:05 crc kubenswrapper[4769]: I0122 14:01:05.497715 4769 generic.go:334] "Generic (PLEG): container finished" podID="51e2f7fd-cd2e-4a84-b62a-27915d32469c" containerID="21355f679d3807ef130aaa327e0801fb4ef81abe61c9581a47edf5ff6be44534" exitCode=0 Jan 22 14:01:05 crc kubenswrapper[4769]: I0122 14:01:05.497869 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-8372-account-create-update-lq4fn" event={"ID":"51e2f7fd-cd2e-4a84-b62a-27915d32469c","Type":"ContainerDied","Data":"21355f679d3807ef130aaa327e0801fb4ef81abe61c9581a47edf5ff6be44534"} Jan 22 14:01:05 crc kubenswrapper[4769]: I0122 14:01:05.506812 4769 generic.go:334] "Generic (PLEG): container finished" podID="3d72603e-a10a-4490-8298-67db64d087fc" containerID="52648bb4b661a8c6c50f29dcbb2e628521c76a98f4664eeeaa26623f333c78ee" exitCode=0 Jan 22 14:01:05 crc kubenswrapper[4769]: I0122 14:01:05.506904 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-5nx2t" event={"ID":"3d72603e-a10a-4490-8298-67db64d087fc","Type":"ContainerDied","Data":"52648bb4b661a8c6c50f29dcbb2e628521c76a98f4664eeeaa26623f333c78ee"} Jan 22 14:01:05 crc kubenswrapper[4769]: I0122 14:01:05.514431 4769 generic.go:334] "Generic (PLEG): container finished" podID="ad0702a4-ee8a-45da-9cb7-40c2e4b257b9" containerID="77def06c9daefb086f0355ee46072f20bab89a75ed5e0bf4dc001c469ff25434" exitCode=0 Jan 22 14:01:05 crc kubenswrapper[4769]: I0122 14:01:05.514501 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-892lk" event={"ID":"ad0702a4-ee8a-45da-9cb7-40c2e4b257b9","Type":"ContainerDied","Data":"77def06c9daefb086f0355ee46072f20bab89a75ed5e0bf4dc001c469ff25434"} Jan 22 14:01:05 crc kubenswrapper[4769]: I0122 14:01:05.514535 4769 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-892lk" event={"ID":"ad0702a4-ee8a-45da-9cb7-40c2e4b257b9","Type":"ContainerStarted","Data":"922a37c04813d1f740b1b1fafb93a43831f287f7e26c6b8164075378950823fd"} Jan 22 14:01:05 crc kubenswrapper[4769]: I0122 14:01:05.528056 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ce65dba3-22b9-482f-b3da-2f4705468ea4","Type":"ContainerStarted","Data":"897d1b1eae0db3bae6b6a31c80b43bd0fb6f29d261414e341a479ba8ba030026"} Jan 22 14:01:05 crc kubenswrapper[4769]: I0122 14:01:05.528103 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ce65dba3-22b9-482f-b3da-2f4705468ea4","Type":"ContainerStarted","Data":"8930b3c902daeb8b565ad6483914b78f7d642a6ddf06936626825b35c7fa4dff"} Jan 22 14:01:05 crc kubenswrapper[4769]: I0122 14:01:05.529579 4769 generic.go:334] "Generic (PLEG): container finished" podID="cb68cb3e-c079-4e87-ae9b-be93a2b8b80e" containerID="9adc3b6e5ed26c0015ab034169ba62530ada71abb392698e2ee878b4e52729c9" exitCode=0 Jan 22 14:01:05 crc kubenswrapper[4769]: I0122 14:01:05.530427 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-24cb-account-create-update-rtdf4" event={"ID":"cb68cb3e-c079-4e87-ae9b-be93a2b8b80e","Type":"ContainerDied","Data":"9adc3b6e5ed26c0015ab034169ba62530ada71abb392698e2ee878b4e52729c9"} Jan 22 14:01:06 crc kubenswrapper[4769]: I0122 14:01:05.999704 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-7r9tp" Jan 22 14:01:06 crc kubenswrapper[4769]: I0122 14:01:06.066768 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae6aa3a5-37b1-42b7-9bac-b861b2d47bb0-operator-scripts\") pod \"ae6aa3a5-37b1-42b7-9bac-b861b2d47bb0\" (UID: \"ae6aa3a5-37b1-42b7-9bac-b861b2d47bb0\") " Jan 22 14:01:06 crc kubenswrapper[4769]: I0122 14:01:06.067017 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldw6j\" (UniqueName: \"kubernetes.io/projected/ae6aa3a5-37b1-42b7-9bac-b861b2d47bb0-kube-api-access-ldw6j\") pod \"ae6aa3a5-37b1-42b7-9bac-b861b2d47bb0\" (UID: \"ae6aa3a5-37b1-42b7-9bac-b861b2d47bb0\") " Jan 22 14:01:06 crc kubenswrapper[4769]: I0122 14:01:06.068782 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae6aa3a5-37b1-42b7-9bac-b861b2d47bb0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ae6aa3a5-37b1-42b7-9bac-b861b2d47bb0" (UID: "ae6aa3a5-37b1-42b7-9bac-b861b2d47bb0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:01:06 crc kubenswrapper[4769]: I0122 14:01:06.076738 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae6aa3a5-37b1-42b7-9bac-b861b2d47bb0-kube-api-access-ldw6j" (OuterVolumeSpecName: "kube-api-access-ldw6j") pod "ae6aa3a5-37b1-42b7-9bac-b861b2d47bb0" (UID: "ae6aa3a5-37b1-42b7-9bac-b861b2d47bb0"). InnerVolumeSpecName "kube-api-access-ldw6j". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 14:01:06 crc kubenswrapper[4769]: I0122 14:01:06.168463 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldw6j\" (UniqueName: \"kubernetes.io/projected/ae6aa3a5-37b1-42b7-9bac-b861b2d47bb0-kube-api-access-ldw6j\") on node \"crc\" DevicePath \"\""
Jan 22 14:01:06 crc kubenswrapper[4769]: I0122 14:01:06.168498 4769 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae6aa3a5-37b1-42b7-9bac-b861b2d47bb0-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 22 14:01:06 crc kubenswrapper[4769]: I0122 14:01:06.540467 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ce65dba3-22b9-482f-b3da-2f4705468ea4","Type":"ContainerStarted","Data":"a768ce4cac51de83b2e0e35b63af262ccfcd78325665e2b1c6145183f8c4b7fc"}
Jan 22 14:01:06 crc kubenswrapper[4769]: I0122 14:01:06.543897 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-7r9tp" event={"ID":"ae6aa3a5-37b1-42b7-9bac-b861b2d47bb0","Type":"ContainerDied","Data":"53daaafba2df179129fbaee7564a7dbb0810bedc12982841f6922ce3e0b0c0bc"}
Jan 22 14:01:06 crc kubenswrapper[4769]: I0122 14:01:06.543949 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53daaafba2df179129fbaee7564a7dbb0810bedc12982841f6922ce3e0b0c0bc"
Jan 22 14:01:06 crc kubenswrapper[4769]: I0122 14:01:06.543964 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-7r9tp"
Jan 22 14:01:06 crc kubenswrapper[4769]: I0122 14:01:06.998448 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-5nx2t"
Jan 22 14:01:07 crc kubenswrapper[4769]: I0122 14:01:07.082763 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bdv24\" (UniqueName: \"kubernetes.io/projected/3d72603e-a10a-4490-8298-67db64d087fc-kube-api-access-bdv24\") pod \"3d72603e-a10a-4490-8298-67db64d087fc\" (UID: \"3d72603e-a10a-4490-8298-67db64d087fc\") "
Jan 22 14:01:07 crc kubenswrapper[4769]: I0122 14:01:07.083115 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d72603e-a10a-4490-8298-67db64d087fc-operator-scripts\") pod \"3d72603e-a10a-4490-8298-67db64d087fc\" (UID: \"3d72603e-a10a-4490-8298-67db64d087fc\") "
Jan 22 14:01:07 crc kubenswrapper[4769]: I0122 14:01:07.083724 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d72603e-a10a-4490-8298-67db64d087fc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3d72603e-a10a-4490-8298-67db64d087fc" (UID: "3d72603e-a10a-4490-8298-67db64d087fc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 14:01:07 crc kubenswrapper[4769]: I0122 14:01:07.088035 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d72603e-a10a-4490-8298-67db64d087fc-kube-api-access-bdv24" (OuterVolumeSpecName: "kube-api-access-bdv24") pod "3d72603e-a10a-4490-8298-67db64d087fc" (UID: "3d72603e-a10a-4490-8298-67db64d087fc"). InnerVolumeSpecName "kube-api-access-bdv24". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 14:01:07 crc kubenswrapper[4769]: I0122 14:01:07.177815 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-24cb-account-create-update-rtdf4"
Jan 22 14:01:07 crc kubenswrapper[4769]: I0122 14:01:07.185440 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bdv24\" (UniqueName: \"kubernetes.io/projected/3d72603e-a10a-4490-8298-67db64d087fc-kube-api-access-bdv24\") on node \"crc\" DevicePath \"\""
Jan 22 14:01:07 crc kubenswrapper[4769]: I0122 14:01:07.185477 4769 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d72603e-a10a-4490-8298-67db64d087fc-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 22 14:01:07 crc kubenswrapper[4769]: I0122 14:01:07.186911 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-892lk"
Jan 22 14:01:07 crc kubenswrapper[4769]: I0122 14:01:07.286627 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb68cb3e-c079-4e87-ae9b-be93a2b8b80e-operator-scripts\") pod \"cb68cb3e-c079-4e87-ae9b-be93a2b8b80e\" (UID: \"cb68cb3e-c079-4e87-ae9b-be93a2b8b80e\") "
Jan 22 14:01:07 crc kubenswrapper[4769]: I0122 14:01:07.286736 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad0702a4-ee8a-45da-9cb7-40c2e4b257b9-operator-scripts\") pod \"ad0702a4-ee8a-45da-9cb7-40c2e4b257b9\" (UID: \"ad0702a4-ee8a-45da-9cb7-40c2e4b257b9\") "
Jan 22 14:01:07 crc kubenswrapper[4769]: I0122 14:01:07.286808 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2wnb\" (UniqueName: \"kubernetes.io/projected/cb68cb3e-c079-4e87-ae9b-be93a2b8b80e-kube-api-access-f2wnb\") pod \"cb68cb3e-c079-4e87-ae9b-be93a2b8b80e\" (UID: \"cb68cb3e-c079-4e87-ae9b-be93a2b8b80e\") "
Jan 22 14:01:07 crc kubenswrapper[4769]: I0122 14:01:07.286832 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45ccg\" (UniqueName: \"kubernetes.io/projected/ad0702a4-ee8a-45da-9cb7-40c2e4b257b9-kube-api-access-45ccg\") pod \"ad0702a4-ee8a-45da-9cb7-40c2e4b257b9\" (UID: \"ad0702a4-ee8a-45da-9cb7-40c2e4b257b9\") "
Jan 22 14:01:07 crc kubenswrapper[4769]: I0122 14:01:07.287399 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb68cb3e-c079-4e87-ae9b-be93a2b8b80e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cb68cb3e-c079-4e87-ae9b-be93a2b8b80e" (UID: "cb68cb3e-c079-4e87-ae9b-be93a2b8b80e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 14:01:07 crc kubenswrapper[4769]: I0122 14:01:07.287466 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad0702a4-ee8a-45da-9cb7-40c2e4b257b9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ad0702a4-ee8a-45da-9cb7-40c2e4b257b9" (UID: "ad0702a4-ee8a-45da-9cb7-40c2e4b257b9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 14:01:07 crc kubenswrapper[4769]: I0122 14:01:07.287850 4769 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad0702a4-ee8a-45da-9cb7-40c2e4b257b9-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 22 14:01:07 crc kubenswrapper[4769]: I0122 14:01:07.287871 4769 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb68cb3e-c079-4e87-ae9b-be93a2b8b80e-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 22 14:01:07 crc kubenswrapper[4769]: I0122 14:01:07.294075 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad0702a4-ee8a-45da-9cb7-40c2e4b257b9-kube-api-access-45ccg" (OuterVolumeSpecName: "kube-api-access-45ccg") pod "ad0702a4-ee8a-45da-9cb7-40c2e4b257b9" (UID: "ad0702a4-ee8a-45da-9cb7-40c2e4b257b9"). InnerVolumeSpecName "kube-api-access-45ccg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 14:01:07 crc kubenswrapper[4769]: I0122 14:01:07.304056 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb68cb3e-c079-4e87-ae9b-be93a2b8b80e-kube-api-access-f2wnb" (OuterVolumeSpecName: "kube-api-access-f2wnb") pod "cb68cb3e-c079-4e87-ae9b-be93a2b8b80e" (UID: "cb68cb3e-c079-4e87-ae9b-be93a2b8b80e"). InnerVolumeSpecName "kube-api-access-f2wnb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 14:01:07 crc kubenswrapper[4769]: I0122 14:01:07.389561 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2wnb\" (UniqueName: \"kubernetes.io/projected/cb68cb3e-c079-4e87-ae9b-be93a2b8b80e-kube-api-access-f2wnb\") on node \"crc\" DevicePath \"\""
Jan 22 14:01:07 crc kubenswrapper[4769]: I0122 14:01:07.389602 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45ccg\" (UniqueName: \"kubernetes.io/projected/ad0702a4-ee8a-45da-9cb7-40c2e4b257b9-kube-api-access-45ccg\") on node \"crc\" DevicePath \"\""
Jan 22 14:01:07 crc kubenswrapper[4769]: I0122 14:01:07.556828 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-24cb-account-create-update-rtdf4" event={"ID":"cb68cb3e-c079-4e87-ae9b-be93a2b8b80e","Type":"ContainerDied","Data":"7d34d84588f950f10863a3d8b771247ec7e6196fa9aab76b092308a4474630c8"}
Jan 22 14:01:07 crc kubenswrapper[4769]: I0122 14:01:07.556867 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d34d84588f950f10863a3d8b771247ec7e6196fa9aab76b092308a4474630c8"
Jan 22 14:01:07 crc kubenswrapper[4769]: I0122 14:01:07.556940 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-24cb-account-create-update-rtdf4"
Jan 22 14:01:07 crc kubenswrapper[4769]: I0122 14:01:07.570244 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-5nx2t"
Jan 22 14:01:07 crc kubenswrapper[4769]: I0122 14:01:07.570227 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-5nx2t" event={"ID":"3d72603e-a10a-4490-8298-67db64d087fc","Type":"ContainerDied","Data":"1398047490e7ad774844fcdd21d36eeaa7ef1a8b0e137e6b1405961ab26a58b1"}
Jan 22 14:01:07 crc kubenswrapper[4769]: I0122 14:01:07.570682 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1398047490e7ad774844fcdd21d36eeaa7ef1a8b0e137e6b1405961ab26a58b1"
Jan 22 14:01:07 crc kubenswrapper[4769]: I0122 14:01:07.574284 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-892lk" event={"ID":"ad0702a4-ee8a-45da-9cb7-40c2e4b257b9","Type":"ContainerDied","Data":"922a37c04813d1f740b1b1fafb93a43831f287f7e26c6b8164075378950823fd"}
Jan 22 14:01:07 crc kubenswrapper[4769]: I0122 14:01:07.574327 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="922a37c04813d1f740b1b1fafb93a43831f287f7e26c6b8164075378950823fd"
Jan 22 14:01:07 crc kubenswrapper[4769]: I0122 14:01:07.574338 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-892lk"
Jan 22 14:01:10 crc kubenswrapper[4769]: I0122 14:01:10.014831 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-8372-account-create-update-lq4fn"
Jan 22 14:01:10 crc kubenswrapper[4769]: I0122 14:01:10.042730 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8bb3-account-create-update-x6jhs"
Jan 22 14:01:10 crc kubenswrapper[4769]: I0122 14:01:10.138934 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hs9rg\" (UniqueName: \"kubernetes.io/projected/ec90402f-c994-4710-b82f-5c8cc3f12fdf-kube-api-access-hs9rg\") pod \"ec90402f-c994-4710-b82f-5c8cc3f12fdf\" (UID: \"ec90402f-c994-4710-b82f-5c8cc3f12fdf\") "
Jan 22 14:01:10 crc kubenswrapper[4769]: I0122 14:01:10.138997 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cl5gx\" (UniqueName: \"kubernetes.io/projected/51e2f7fd-cd2e-4a84-b62a-27915d32469c-kube-api-access-cl5gx\") pod \"51e2f7fd-cd2e-4a84-b62a-27915d32469c\" (UID: \"51e2f7fd-cd2e-4a84-b62a-27915d32469c\") "
Jan 22 14:01:10 crc kubenswrapper[4769]: I0122 14:01:10.139072 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51e2f7fd-cd2e-4a84-b62a-27915d32469c-operator-scripts\") pod \"51e2f7fd-cd2e-4a84-b62a-27915d32469c\" (UID: \"51e2f7fd-cd2e-4a84-b62a-27915d32469c\") "
Jan 22 14:01:10 crc kubenswrapper[4769]: I0122 14:01:10.139095 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec90402f-c994-4710-b82f-5c8cc3f12fdf-operator-scripts\") pod \"ec90402f-c994-4710-b82f-5c8cc3f12fdf\" (UID: \"ec90402f-c994-4710-b82f-5c8cc3f12fdf\") "
Jan 22 14:01:10 crc kubenswrapper[4769]: I0122 14:01:10.139601 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51e2f7fd-cd2e-4a84-b62a-27915d32469c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "51e2f7fd-cd2e-4a84-b62a-27915d32469c" (UID: "51e2f7fd-cd2e-4a84-b62a-27915d32469c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 14:01:10 crc kubenswrapper[4769]: I0122 14:01:10.139966 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec90402f-c994-4710-b82f-5c8cc3f12fdf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ec90402f-c994-4710-b82f-5c8cc3f12fdf" (UID: "ec90402f-c994-4710-b82f-5c8cc3f12fdf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 14:01:10 crc kubenswrapper[4769]: I0122 14:01:10.143328 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec90402f-c994-4710-b82f-5c8cc3f12fdf-kube-api-access-hs9rg" (OuterVolumeSpecName: "kube-api-access-hs9rg") pod "ec90402f-c994-4710-b82f-5c8cc3f12fdf" (UID: "ec90402f-c994-4710-b82f-5c8cc3f12fdf"). InnerVolumeSpecName "kube-api-access-hs9rg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 14:01:10 crc kubenswrapper[4769]: I0122 14:01:10.144845 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51e2f7fd-cd2e-4a84-b62a-27915d32469c-kube-api-access-cl5gx" (OuterVolumeSpecName: "kube-api-access-cl5gx") pod "51e2f7fd-cd2e-4a84-b62a-27915d32469c" (UID: "51e2f7fd-cd2e-4a84-b62a-27915d32469c"). InnerVolumeSpecName "kube-api-access-cl5gx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 14:01:10 crc kubenswrapper[4769]: I0122 14:01:10.240935 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hs9rg\" (UniqueName: \"kubernetes.io/projected/ec90402f-c994-4710-b82f-5c8cc3f12fdf-kube-api-access-hs9rg\") on node \"crc\" DevicePath \"\""
Jan 22 14:01:10 crc kubenswrapper[4769]: I0122 14:01:10.240974 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cl5gx\" (UniqueName: \"kubernetes.io/projected/51e2f7fd-cd2e-4a84-b62a-27915d32469c-kube-api-access-cl5gx\") on node \"crc\" DevicePath \"\""
Jan 22 14:01:10 crc kubenswrapper[4769]: I0122 14:01:10.240984 4769 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51e2f7fd-cd2e-4a84-b62a-27915d32469c-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 22 14:01:10 crc kubenswrapper[4769]: I0122 14:01:10.240992 4769 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec90402f-c994-4710-b82f-5c8cc3f12fdf-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 22 14:01:10 crc kubenswrapper[4769]: I0122 14:01:10.599806 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ce65dba3-22b9-482f-b3da-2f4705468ea4","Type":"ContainerStarted","Data":"71fb5bc6f9e9c2c599e91ba4cb6564a5a859a950c3b343c6218824a3ce16549a"}
Jan 22 14:01:10 crc kubenswrapper[4769]: I0122 14:01:10.601842 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8bb3-account-create-update-x6jhs"
Jan 22 14:01:10 crc kubenswrapper[4769]: I0122 14:01:10.601842 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8bb3-account-create-update-x6jhs" event={"ID":"ec90402f-c994-4710-b82f-5c8cc3f12fdf","Type":"ContainerDied","Data":"2526f6d6abe9ddf1def4e75e6755fa98fa5b8f9ceae123095b211a7facde003a"}
Jan 22 14:01:10 crc kubenswrapper[4769]: I0122 14:01:10.602106 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2526f6d6abe9ddf1def4e75e6755fa98fa5b8f9ceae123095b211a7facde003a"
Jan 22 14:01:10 crc kubenswrapper[4769]: I0122 14:01:10.604821 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-8372-account-create-update-lq4fn" event={"ID":"51e2f7fd-cd2e-4a84-b62a-27915d32469c","Type":"ContainerDied","Data":"d679f95f173487e55b7459bd3fc7f4540a679004c865d7f6767595d3d679ed77"}
Jan 22 14:01:10 crc kubenswrapper[4769]: I0122 14:01:10.604986 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d679f95f173487e55b7459bd3fc7f4540a679004c865d7f6767595d3d679ed77"
Jan 22 14:01:10 crc kubenswrapper[4769]: I0122 14:01:10.605065 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-8372-account-create-update-lq4fn"
Jan 22 14:01:10 crc kubenswrapper[4769]: I0122 14:01:10.611365 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-r7c9w" event={"ID":"275c0c66-cbd1-4469-81f6-c33a1eab0ed6","Type":"ContainerStarted","Data":"3fff52ca9914171d818af9485b605a038595dddbd005e73b62529f4a697aa6bd"}
Jan 22 14:01:10 crc kubenswrapper[4769]: I0122 14:01:10.632407 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-r7c9w" podStartSLOduration=2.188598078 podStartE2EDuration="8.632391781s" podCreationTimestamp="2026-01-22 14:01:02 +0000 UTC" firstStartedPulling="2026-01-22 14:01:03.458711386 +0000 UTC m=+1042.869821325" lastFinishedPulling="2026-01-22 14:01:09.902505099 +0000 UTC m=+1049.313615028" observedRunningTime="2026-01-22 14:01:10.624918907 +0000 UTC m=+1050.036028836" watchObservedRunningTime="2026-01-22 14:01:10.632391781 +0000 UTC m=+1050.043501710"
Jan 22 14:01:11 crc kubenswrapper[4769]: I0122 14:01:11.626008 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ce65dba3-22b9-482f-b3da-2f4705468ea4","Type":"ContainerStarted","Data":"b8093b148307c549128a45be1e93e4639e7ec527913f9314337ea0c5f3334a00"}
Jan 22 14:01:11 crc kubenswrapper[4769]: I0122 14:01:11.626409 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ce65dba3-22b9-482f-b3da-2f4705468ea4","Type":"ContainerStarted","Data":"8e03e577b39ab439637acb4ad818d5d0a4150b3aa00c5025409ee33d5361ebe5"}
Jan 22 14:01:11 crc kubenswrapper[4769]: I0122 14:01:11.626429 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ce65dba3-22b9-482f-b3da-2f4705468ea4","Type":"ContainerStarted","Data":"1e6a6bed31e512a165b094eda0f928465082927b9b175261d080013cdbf2e8bc"}
Jan 22 14:01:11 crc kubenswrapper[4769]: I0122 14:01:11.665746 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=24.878220274 podStartE2EDuration="44.66572537s" podCreationTimestamp="2026-01-22 14:00:27 +0000 UTC" firstStartedPulling="2026-01-22 14:00:44.765178014 +0000 UTC m=+1024.176287943" lastFinishedPulling="2026-01-22 14:01:04.55268311 +0000 UTC m=+1043.963793039" observedRunningTime="2026-01-22 14:01:11.65950431 +0000 UTC m=+1051.070614249" watchObservedRunningTime="2026-01-22 14:01:11.66572537 +0000 UTC m=+1051.076835299"
Jan 22 14:01:11 crc kubenswrapper[4769]: I0122 14:01:11.951910 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-xsb4l"]
Jan 22 14:01:11 crc kubenswrapper[4769]: E0122 14:01:11.952591 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51e2f7fd-cd2e-4a84-b62a-27915d32469c" containerName="mariadb-account-create-update"
Jan 22 14:01:11 crc kubenswrapper[4769]: I0122 14:01:11.952611 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="51e2f7fd-cd2e-4a84-b62a-27915d32469c" containerName="mariadb-account-create-update"
Jan 22 14:01:11 crc kubenswrapper[4769]: E0122 14:01:11.952628 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb68cb3e-c079-4e87-ae9b-be93a2b8b80e" containerName="mariadb-account-create-update"
Jan 22 14:01:11 crc kubenswrapper[4769]: I0122 14:01:11.952634 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb68cb3e-c079-4e87-ae9b-be93a2b8b80e" containerName="mariadb-account-create-update"
Jan 22 14:01:11 crc kubenswrapper[4769]: E0122 14:01:11.952645 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d72603e-a10a-4490-8298-67db64d087fc" containerName="mariadb-database-create"
Jan 22 14:01:11 crc kubenswrapper[4769]: I0122 14:01:11.952651 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d72603e-a10a-4490-8298-67db64d087fc" containerName="mariadb-database-create"
Jan 22 14:01:11 crc kubenswrapper[4769]: E0122 14:01:11.952664 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad0702a4-ee8a-45da-9cb7-40c2e4b257b9" containerName="mariadb-database-create"
Jan 22 14:01:11 crc kubenswrapper[4769]: I0122 14:01:11.952670 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad0702a4-ee8a-45da-9cb7-40c2e4b257b9" containerName="mariadb-database-create"
Jan 22 14:01:11 crc kubenswrapper[4769]: E0122 14:01:11.952690 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec90402f-c994-4710-b82f-5c8cc3f12fdf" containerName="mariadb-account-create-update"
Jan 22 14:01:11 crc kubenswrapper[4769]: I0122 14:01:11.952695 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec90402f-c994-4710-b82f-5c8cc3f12fdf" containerName="mariadb-account-create-update"
Jan 22 14:01:11 crc kubenswrapper[4769]: E0122 14:01:11.952704 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae6aa3a5-37b1-42b7-9bac-b861b2d47bb0" containerName="mariadb-database-create"
Jan 22 14:01:11 crc kubenswrapper[4769]: I0122 14:01:11.952711 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae6aa3a5-37b1-42b7-9bac-b861b2d47bb0" containerName="mariadb-database-create"
Jan 22 14:01:11 crc kubenswrapper[4769]: I0122 14:01:11.952867 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec90402f-c994-4710-b82f-5c8cc3f12fdf" containerName="mariadb-account-create-update"
Jan 22 14:01:11 crc kubenswrapper[4769]: I0122 14:01:11.952879 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d72603e-a10a-4490-8298-67db64d087fc" containerName="mariadb-database-create"
Jan 22 14:01:11 crc kubenswrapper[4769]: I0122 14:01:11.952889 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad0702a4-ee8a-45da-9cb7-40c2e4b257b9" containerName="mariadb-database-create"
Jan 22 14:01:11 crc kubenswrapper[4769]: I0122 14:01:11.952898 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae6aa3a5-37b1-42b7-9bac-b861b2d47bb0" containerName="mariadb-database-create"
Jan 22 14:01:11 crc kubenswrapper[4769]: I0122 14:01:11.952908 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb68cb3e-c079-4e87-ae9b-be93a2b8b80e" containerName="mariadb-account-create-update"
Jan 22 14:01:11 crc kubenswrapper[4769]: I0122 14:01:11.952917 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="51e2f7fd-cd2e-4a84-b62a-27915d32469c" containerName="mariadb-account-create-update"
Jan 22 14:01:11 crc kubenswrapper[4769]: I0122 14:01:11.953721 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-xsb4l"
Jan 22 14:01:11 crc kubenswrapper[4769]: I0122 14:01:11.956310 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0"
Jan 22 14:01:11 crc kubenswrapper[4769]: I0122 14:01:11.965959 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-xsb4l"]
Jan 22 14:01:12 crc kubenswrapper[4769]: I0122 14:01:12.081300 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d83064db-7f62-4af5-9747-89e9054b3a16-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-xsb4l\" (UID: \"d83064db-7f62-4af5-9747-89e9054b3a16\") " pod="openstack/dnsmasq-dns-77585f5f8c-xsb4l"
Jan 22 14:01:12 crc kubenswrapper[4769]: I0122 14:01:12.081374 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d83064db-7f62-4af5-9747-89e9054b3a16-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-xsb4l\" (UID: \"d83064db-7f62-4af5-9747-89e9054b3a16\") " pod="openstack/dnsmasq-dns-77585f5f8c-xsb4l"
Jan 22 14:01:12 crc kubenswrapper[4769]: I0122 14:01:12.081549 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf9rk\" (UniqueName: \"kubernetes.io/projected/d83064db-7f62-4af5-9747-89e9054b3a16-kube-api-access-bf9rk\") pod \"dnsmasq-dns-77585f5f8c-xsb4l\" (UID: \"d83064db-7f62-4af5-9747-89e9054b3a16\") " pod="openstack/dnsmasq-dns-77585f5f8c-xsb4l"
Jan 22 14:01:12 crc kubenswrapper[4769]: I0122 14:01:12.081727 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d83064db-7f62-4af5-9747-89e9054b3a16-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-xsb4l\" (UID: \"d83064db-7f62-4af5-9747-89e9054b3a16\") " pod="openstack/dnsmasq-dns-77585f5f8c-xsb4l"
Jan 22 14:01:12 crc kubenswrapper[4769]: I0122 14:01:12.081881 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d83064db-7f62-4af5-9747-89e9054b3a16-config\") pod \"dnsmasq-dns-77585f5f8c-xsb4l\" (UID: \"d83064db-7f62-4af5-9747-89e9054b3a16\") " pod="openstack/dnsmasq-dns-77585f5f8c-xsb4l"
Jan 22 14:01:12 crc kubenswrapper[4769]: I0122 14:01:12.081937 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d83064db-7f62-4af5-9747-89e9054b3a16-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-xsb4l\" (UID: \"d83064db-7f62-4af5-9747-89e9054b3a16\") " pod="openstack/dnsmasq-dns-77585f5f8c-xsb4l"
Jan 22 14:01:12 crc kubenswrapper[4769]: I0122 14:01:12.184013 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bf9rk\" (UniqueName: \"kubernetes.io/projected/d83064db-7f62-4af5-9747-89e9054b3a16-kube-api-access-bf9rk\") pod \"dnsmasq-dns-77585f5f8c-xsb4l\" (UID: \"d83064db-7f62-4af5-9747-89e9054b3a16\") " pod="openstack/dnsmasq-dns-77585f5f8c-xsb4l"
Jan 22 14:01:12 crc kubenswrapper[4769]: I0122 14:01:12.184147 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d83064db-7f62-4af5-9747-89e9054b3a16-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-xsb4l\" (UID: \"d83064db-7f62-4af5-9747-89e9054b3a16\") " pod="openstack/dnsmasq-dns-77585f5f8c-xsb4l"
Jan 22 14:01:12 crc kubenswrapper[4769]: I0122 14:01:12.184192 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d83064db-7f62-4af5-9747-89e9054b3a16-config\") pod \"dnsmasq-dns-77585f5f8c-xsb4l\" (UID: \"d83064db-7f62-4af5-9747-89e9054b3a16\") " pod="openstack/dnsmasq-dns-77585f5f8c-xsb4l"
Jan 22 14:01:12 crc kubenswrapper[4769]: I0122 14:01:12.184225 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d83064db-7f62-4af5-9747-89e9054b3a16-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-xsb4l\" (UID: \"d83064db-7f62-4af5-9747-89e9054b3a16\") " pod="openstack/dnsmasq-dns-77585f5f8c-xsb4l"
Jan 22 14:01:12 crc kubenswrapper[4769]: I0122 14:01:12.184297 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d83064db-7f62-4af5-9747-89e9054b3a16-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-xsb4l\" (UID: \"d83064db-7f62-4af5-9747-89e9054b3a16\") " pod="openstack/dnsmasq-dns-77585f5f8c-xsb4l"
Jan 22 14:01:12 crc kubenswrapper[4769]: I0122 14:01:12.184327 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d83064db-7f62-4af5-9747-89e9054b3a16-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-xsb4l\" (UID: \"d83064db-7f62-4af5-9747-89e9054b3a16\") " pod="openstack/dnsmasq-dns-77585f5f8c-xsb4l"
Jan 22 14:01:12 crc kubenswrapper[4769]: I0122 14:01:12.185184 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d83064db-7f62-4af5-9747-89e9054b3a16-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-xsb4l\" (UID: \"d83064db-7f62-4af5-9747-89e9054b3a16\") " pod="openstack/dnsmasq-dns-77585f5f8c-xsb4l"
Jan 22 14:01:12 crc kubenswrapper[4769]: I0122 14:01:12.185185 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d83064db-7f62-4af5-9747-89e9054b3a16-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-xsb4l\" (UID: \"d83064db-7f62-4af5-9747-89e9054b3a16\") " pod="openstack/dnsmasq-dns-77585f5f8c-xsb4l"
Jan 22 14:01:12 crc kubenswrapper[4769]: I0122 14:01:12.185185 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d83064db-7f62-4af5-9747-89e9054b3a16-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-xsb4l\" (UID: \"d83064db-7f62-4af5-9747-89e9054b3a16\") " pod="openstack/dnsmasq-dns-77585f5f8c-xsb4l"
Jan 22 14:01:12 crc kubenswrapper[4769]: I0122 14:01:12.185349 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d83064db-7f62-4af5-9747-89e9054b3a16-config\") pod \"dnsmasq-dns-77585f5f8c-xsb4l\" (UID: \"d83064db-7f62-4af5-9747-89e9054b3a16\") " pod="openstack/dnsmasq-dns-77585f5f8c-xsb4l"
Jan 22 14:01:12 crc kubenswrapper[4769]: I0122 14:01:12.185491 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d83064db-7f62-4af5-9747-89e9054b3a16-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-xsb4l\" (UID: \"d83064db-7f62-4af5-9747-89e9054b3a16\") " pod="openstack/dnsmasq-dns-77585f5f8c-xsb4l"
Jan 22 14:01:12 crc kubenswrapper[4769]: I0122 14:01:12.211905 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bf9rk\" (UniqueName: \"kubernetes.io/projected/d83064db-7f62-4af5-9747-89e9054b3a16-kube-api-access-bf9rk\") pod \"dnsmasq-dns-77585f5f8c-xsb4l\" (UID: \"d83064db-7f62-4af5-9747-89e9054b3a16\") " pod="openstack/dnsmasq-dns-77585f5f8c-xsb4l"
Jan 22 14:01:12 crc kubenswrapper[4769]: I0122 14:01:12.272759 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-xsb4l"
Jan 22 14:01:12 crc kubenswrapper[4769]: I0122 14:01:12.742145 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-xsb4l"]
Jan 22 14:01:12 crc kubenswrapper[4769]: W0122 14:01:12.754233 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd83064db_7f62_4af5_9747_89e9054b3a16.slice/crio-dbb6500f9b05697e49983b947e554cc120f56667f136da516b67517e34882406 WatchSource:0}: Error finding container dbb6500f9b05697e49983b947e554cc120f56667f136da516b67517e34882406: Status 404 returned error can't find the container with id dbb6500f9b05697e49983b947e554cc120f56667f136da516b67517e34882406
Jan 22 14:01:13 crc kubenswrapper[4769]: I0122 14:01:13.644355 4769 generic.go:334] "Generic (PLEG): container finished" podID="d83064db-7f62-4af5-9747-89e9054b3a16" containerID="9774b2b75f642e7815cf529b073ae431051a8ec6d35e8b2a86b691abcc256a58" exitCode=0
Jan 22 14:01:13 crc kubenswrapper[4769]: I0122 14:01:13.644515 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-xsb4l" event={"ID":"d83064db-7f62-4af5-9747-89e9054b3a16","Type":"ContainerDied","Data":"9774b2b75f642e7815cf529b073ae431051a8ec6d35e8b2a86b691abcc256a58"}
Jan 22 14:01:13 crc kubenswrapper[4769]: I0122 14:01:13.644763 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-xsb4l" event={"ID":"d83064db-7f62-4af5-9747-89e9054b3a16","Type":"ContainerStarted","Data":"dbb6500f9b05697e49983b947e554cc120f56667f136da516b67517e34882406"}
Jan 22 14:01:14 crc kubenswrapper[4769]: I0122 14:01:14.657214 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-xsb4l" event={"ID":"d83064db-7f62-4af5-9747-89e9054b3a16","Type":"ContainerStarted","Data":"c113fcdaeea4262d86857b16ac35b7758e49c93e4706f03d96c76c2d8565a5e4"}
Jan 22 14:01:14 crc kubenswrapper[4769]: I0122 14:01:14.657687 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-77585f5f8c-xsb4l"
Jan 22 14:01:14 crc kubenswrapper[4769]: I0122 14:01:14.659780 4769 generic.go:334] "Generic (PLEG): container finished" podID="275c0c66-cbd1-4469-81f6-c33a1eab0ed6" containerID="3fff52ca9914171d818af9485b605a038595dddbd005e73b62529f4a697aa6bd" exitCode=0
Jan 22 14:01:14 crc kubenswrapper[4769]: I0122 14:01:14.659879 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-r7c9w" event={"ID":"275c0c66-cbd1-4469-81f6-c33a1eab0ed6","Type":"ContainerDied","Data":"3fff52ca9914171d818af9485b605a038595dddbd005e73b62529f4a697aa6bd"}
Jan 22 14:01:14 crc kubenswrapper[4769]: I0122 14:01:14.688351 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-77585f5f8c-xsb4l" podStartSLOduration=3.688317317 podStartE2EDuration="3.688317317s" podCreationTimestamp="2026-01-22 14:01:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:01:14.674821261 +0000 UTC m=+1054.085931190" watchObservedRunningTime="2026-01-22 14:01:14.688317317 +0000 UTC m=+1054.099427316"
Jan 22 14:01:15 crc kubenswrapper[4769]: E0122 14:01:15.728447 4769 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb4b4ca8a_8b9e_48d2_9208_fecb2bc9a299.slice/crio-61d9e5ec964872c1028545493f0b6a3c6f57bd0bc24e83e376180164d65cbfb4.scope\": RecentStats: unable to find data in memory cache]"
Jan 22 14:01:15 crc kubenswrapper[4769]: I0122 14:01:15.980906 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-r7c9w"
Jan 22 14:01:16 crc kubenswrapper[4769]: I0122 14:01:16.049144 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/275c0c66-cbd1-4469-81f6-c33a1eab0ed6-combined-ca-bundle\") pod \"275c0c66-cbd1-4469-81f6-c33a1eab0ed6\" (UID: \"275c0c66-cbd1-4469-81f6-c33a1eab0ed6\") "
Jan 22 14:01:16 crc kubenswrapper[4769]: I0122 14:01:16.049290 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/275c0c66-cbd1-4469-81f6-c33a1eab0ed6-config-data\") pod \"275c0c66-cbd1-4469-81f6-c33a1eab0ed6\" (UID: \"275c0c66-cbd1-4469-81f6-c33a1eab0ed6\") "
Jan 22 14:01:16 crc kubenswrapper[4769]: I0122 14:01:16.049358 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ld4t\" (UniqueName: \"kubernetes.io/projected/275c0c66-cbd1-4469-81f6-c33a1eab0ed6-kube-api-access-6ld4t\") pod \"275c0c66-cbd1-4469-81f6-c33a1eab0ed6\" (UID: \"275c0c66-cbd1-4469-81f6-c33a1eab0ed6\") "
Jan 22 14:01:16 crc kubenswrapper[4769]: I0122 14:01:16.054746 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/275c0c66-cbd1-4469-81f6-c33a1eab0ed6-kube-api-access-6ld4t" (OuterVolumeSpecName: "kube-api-access-6ld4t") pod "275c0c66-cbd1-4469-81f6-c33a1eab0ed6" (UID: "275c0c66-cbd1-4469-81f6-c33a1eab0ed6"). InnerVolumeSpecName "kube-api-access-6ld4t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 14:01:16 crc kubenswrapper[4769]: I0122 14:01:16.073882 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/275c0c66-cbd1-4469-81f6-c33a1eab0ed6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "275c0c66-cbd1-4469-81f6-c33a1eab0ed6" (UID: "275c0c66-cbd1-4469-81f6-c33a1eab0ed6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 14:01:16 crc kubenswrapper[4769]: I0122 14:01:16.111900 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/275c0c66-cbd1-4469-81f6-c33a1eab0ed6-config-data" (OuterVolumeSpecName: "config-data") pod "275c0c66-cbd1-4469-81f6-c33a1eab0ed6" (UID: "275c0c66-cbd1-4469-81f6-c33a1eab0ed6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 14:01:16 crc kubenswrapper[4769]: I0122 14:01:16.151512 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/275c0c66-cbd1-4469-81f6-c33a1eab0ed6-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 14:01:16 crc kubenswrapper[4769]: I0122 14:01:16.151545 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/275c0c66-cbd1-4469-81f6-c33a1eab0ed6-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 14:01:16 crc kubenswrapper[4769]: I0122 14:01:16.151556 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ld4t\" (UniqueName: \"kubernetes.io/projected/275c0c66-cbd1-4469-81f6-c33a1eab0ed6-kube-api-access-6ld4t\") on node \"crc\" DevicePath \"\""
Jan 22 14:01:16 crc kubenswrapper[4769]: I0122 14:01:16.679462 4769 generic.go:334] "Generic (PLEG): container finished" podID="b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299" containerID="61d9e5ec964872c1028545493f0b6a3c6f57bd0bc24e83e376180164d65cbfb4" exitCode=0
Jan 22 14:01:16 crc kubenswrapper[4769]: I0122 14:01:16.679527 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-t9sxw" event={"ID":"b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299","Type":"ContainerDied","Data":"61d9e5ec964872c1028545493f0b6a3c6f57bd0bc24e83e376180164d65cbfb4"}
Jan 22 14:01:16 crc kubenswrapper[4769]: I0122 14:01:16.681283 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-r7c9w" event={"ID":"275c0c66-cbd1-4469-81f6-c33a1eab0ed6","Type":"ContainerDied","Data":"92baa55a546dc1edc3b0176ea083063e122cca726bb4af4e4e8f8b15d0ee43c7"}
Jan 22 14:01:16 crc kubenswrapper[4769]: I0122 14:01:16.681308 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92baa55a546dc1edc3b0176ea083063e122cca726bb4af4e4e8f8b15d0ee43c7"
Jan 22 14:01:16 crc kubenswrapper[4769]: I0122 14:01:16.681364 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-r7c9w"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.007315 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-wdqr9"]
Jan 22 14:01:17 crc kubenswrapper[4769]: E0122 14:01:17.012298 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="275c0c66-cbd1-4469-81f6-c33a1eab0ed6" containerName="keystone-db-sync"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.012564 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="275c0c66-cbd1-4469-81f6-c33a1eab0ed6" containerName="keystone-db-sync"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.012895 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="275c0c66-cbd1-4469-81f6-c33a1eab0ed6" containerName="keystone-db-sync"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.013657 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-wdqr9"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.019836 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.020307 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.020309 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.020634 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-nrw5d"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.020936 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.032863 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-xsb4l"]
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.033165 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-77585f5f8c-xsb4l" podUID="d83064db-7f62-4af5-9747-89e9054b3a16" containerName="dnsmasq-dns" containerID="cri-o://c113fcdaeea4262d86857b16ac35b7758e49c93e4706f03d96c76c2d8565a5e4" gracePeriod=10
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.046986 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-wdqr9"]
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.064886 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/77ac558e-a319-4c27-9869-fee6f85736e5-credential-keys\") pod \"keystone-bootstrap-wdqr9\" (UID: \"77ac558e-a319-4c27-9869-fee6f85736e5\") " pod="openstack/keystone-bootstrap-wdqr9"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.064971 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77ac558e-a319-4c27-9869-fee6f85736e5-scripts\") pod \"keystone-bootstrap-wdqr9\" (UID: \"77ac558e-a319-4c27-9869-fee6f85736e5\") " pod="openstack/keystone-bootstrap-wdqr9"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.064998 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9vbb\" (UniqueName: \"kubernetes.io/projected/77ac558e-a319-4c27-9869-fee6f85736e5-kube-api-access-f9vbb\") pod \"keystone-bootstrap-wdqr9\" (UID: \"77ac558e-a319-4c27-9869-fee6f85736e5\") " pod="openstack/keystone-bootstrap-wdqr9"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.065064 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77ac558e-a319-4c27-9869-fee6f85736e5-combined-ca-bundle\") pod \"keystone-bootstrap-wdqr9\" (UID: \"77ac558e-a319-4c27-9869-fee6f85736e5\") " pod="openstack/keystone-bootstrap-wdqr9"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.065105 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/77ac558e-a319-4c27-9869-fee6f85736e5-fernet-keys\") pod \"keystone-bootstrap-wdqr9\" (UID: \"77ac558e-a319-4c27-9869-fee6f85736e5\") " pod="openstack/keystone-bootstrap-wdqr9"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.065144 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77ac558e-a319-4c27-9869-fee6f85736e5-config-data\") pod \"keystone-bootstrap-wdqr9\" (UID: \"77ac558e-a319-4c27-9869-fee6f85736e5\") " pod="openstack/keystone-bootstrap-wdqr9"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.141696 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-h5gf8"]
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.162935 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55fff446b9-h5gf8"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.166996 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/77ac558e-a319-4c27-9869-fee6f85736e5-credential-keys\") pod \"keystone-bootstrap-wdqr9\" (UID: \"77ac558e-a319-4c27-9869-fee6f85736e5\") " pod="openstack/keystone-bootstrap-wdqr9"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.167057 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77ac558e-a319-4c27-9869-fee6f85736e5-scripts\") pod \"keystone-bootstrap-wdqr9\" (UID: \"77ac558e-a319-4c27-9869-fee6f85736e5\") " pod="openstack/keystone-bootstrap-wdqr9"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.167082 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9vbb\" (UniqueName: \"kubernetes.io/projected/77ac558e-a319-4c27-9869-fee6f85736e5-kube-api-access-f9vbb\") pod \"keystone-bootstrap-wdqr9\" (UID: \"77ac558e-a319-4c27-9869-fee6f85736e5\") " pod="openstack/keystone-bootstrap-wdqr9"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.167132 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77ac558e-a319-4c27-9869-fee6f85736e5-combined-ca-bundle\") pod \"keystone-bootstrap-wdqr9\" (UID: \"77ac558e-a319-4c27-9869-fee6f85736e5\") " pod="openstack/keystone-bootstrap-wdqr9"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.167159 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/77ac558e-a319-4c27-9869-fee6f85736e5-fernet-keys\") pod \"keystone-bootstrap-wdqr9\" (UID: \"77ac558e-a319-4c27-9869-fee6f85736e5\") " pod="openstack/keystone-bootstrap-wdqr9"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.167186 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77ac558e-a319-4c27-9869-fee6f85736e5-config-data\") pod \"keystone-bootstrap-wdqr9\" (UID: \"77ac558e-a319-4c27-9869-fee6f85736e5\") " pod="openstack/keystone-bootstrap-wdqr9"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.171712 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/77ac558e-a319-4c27-9869-fee6f85736e5-credential-keys\") pod \"keystone-bootstrap-wdqr9\" (UID: \"77ac558e-a319-4c27-9869-fee6f85736e5\") " pod="openstack/keystone-bootstrap-wdqr9"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.172039 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77ac558e-a319-4c27-9869-fee6f85736e5-config-data\") pod \"keystone-bootstrap-wdqr9\" (UID: \"77ac558e-a319-4c27-9869-fee6f85736e5\") " pod="openstack/keystone-bootstrap-wdqr9"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.172174 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77ac558e-a319-4c27-9869-fee6f85736e5-scripts\") pod \"keystone-bootstrap-wdqr9\" (UID: \"77ac558e-a319-4c27-9869-fee6f85736e5\") " pod="openstack/keystone-bootstrap-wdqr9"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.173377 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77ac558e-a319-4c27-9869-fee6f85736e5-combined-ca-bundle\") pod \"keystone-bootstrap-wdqr9\" (UID: \"77ac558e-a319-4c27-9869-fee6f85736e5\") " pod="openstack/keystone-bootstrap-wdqr9"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.173842 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/77ac558e-a319-4c27-9869-fee6f85736e5-fernet-keys\") pod \"keystone-bootstrap-wdqr9\" (UID: \"77ac558e-a319-4c27-9869-fee6f85736e5\") " pod="openstack/keystone-bootstrap-wdqr9"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.237807 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9vbb\" (UniqueName: \"kubernetes.io/projected/77ac558e-a319-4c27-9869-fee6f85736e5-kube-api-access-f9vbb\") pod \"keystone-bootstrap-wdqr9\" (UID: \"77ac558e-a319-4c27-9869-fee6f85736e5\") " pod="openstack/keystone-bootstrap-wdqr9"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.264936 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-h5gf8"]
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.272752 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0a7a7218-57a6-4091-9bd0-568fda3122fd-dns-svc\") pod \"dnsmasq-dns-55fff446b9-h5gf8\" (UID: \"0a7a7218-57a6-4091-9bd0-568fda3122fd\") " pod="openstack/dnsmasq-dns-55fff446b9-h5gf8"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.304001 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cpcr\" (UniqueName: \"kubernetes.io/projected/0a7a7218-57a6-4091-9bd0-568fda3122fd-kube-api-access-7cpcr\") pod \"dnsmasq-dns-55fff446b9-h5gf8\" (UID: \"0a7a7218-57a6-4091-9bd0-568fda3122fd\") " pod="openstack/dnsmasq-dns-55fff446b9-h5gf8"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.304078 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-89bdb59-vr94p"]
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.304109 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a7a7218-57a6-4091-9bd0-568fda3122fd-config\") pod \"dnsmasq-dns-55fff446b9-h5gf8\" (UID: \"0a7a7218-57a6-4091-9bd0-568fda3122fd\") " pod="openstack/dnsmasq-dns-55fff446b9-h5gf8"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.304147 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0a7a7218-57a6-4091-9bd0-568fda3122fd-dns-swift-storage-0\") pod \"dnsmasq-dns-55fff446b9-h5gf8\" (UID: \"0a7a7218-57a6-4091-9bd0-568fda3122fd\") " pod="openstack/dnsmasq-dns-55fff446b9-h5gf8"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.304275 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0a7a7218-57a6-4091-9bd0-568fda3122fd-ovsdbserver-nb\") pod \"dnsmasq-dns-55fff446b9-h5gf8\" (UID: \"0a7a7218-57a6-4091-9bd0-568fda3122fd\") " pod="openstack/dnsmasq-dns-55fff446b9-h5gf8"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.304340 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0a7a7218-57a6-4091-9bd0-568fda3122fd-ovsdbserver-sb\") pod \"dnsmasq-dns-55fff446b9-h5gf8\" (UID: \"0a7a7218-57a6-4091-9bd0-568fda3122fd\") " pod="openstack/dnsmasq-dns-55fff446b9-h5gf8"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.315120 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-89bdb59-vr94p"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.317261 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.317525 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-jpqbp"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.322429 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.322683 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-rqjpw"]
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.323580 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.335640 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-wdqr9"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.340242 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-rqjpw"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.350204 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-rqjpw"]
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.350328 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.350491 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.350682 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-7p5j2"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.412865 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-89bdb59-vr94p"]
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.413683 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzsdc\" (UniqueName: \"kubernetes.io/projected/f7c0ef06-5806-418c-8a10-81ea6afb0401-kube-api-access-rzsdc\") pod \"neutron-db-sync-rqjpw\" (UID: \"f7c0ef06-5806-418c-8a10-81ea6afb0401\") " pod="openstack/neutron-db-sync-rqjpw"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.413712 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0a7a7218-57a6-4091-9bd0-568fda3122fd-ovsdbserver-nb\") pod \"dnsmasq-dns-55fff446b9-h5gf8\" (UID: \"0a7a7218-57a6-4091-9bd0-568fda3122fd\") " pod="openstack/dnsmasq-dns-55fff446b9-h5gf8"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.413735 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0a7a7218-57a6-4091-9bd0-568fda3122fd-ovsdbserver-sb\") pod \"dnsmasq-dns-55fff446b9-h5gf8\" (UID: \"0a7a7218-57a6-4091-9bd0-568fda3122fd\") " pod="openstack/dnsmasq-dns-55fff446b9-h5gf8"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.413754 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5c4b43cf-c766-4b56-a016-a3f2d26656a1-scripts\") pod \"horizon-89bdb59-vr94p\" (UID: \"5c4b43cf-c766-4b56-a016-a3f2d26656a1\") " pod="openstack/horizon-89bdb59-vr94p"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.413843 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f7c0ef06-5806-418c-8a10-81ea6afb0401-config\") pod \"neutron-db-sync-rqjpw\" (UID: \"f7c0ef06-5806-418c-8a10-81ea6afb0401\") " pod="openstack/neutron-db-sync-rqjpw"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.413866 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7c0ef06-5806-418c-8a10-81ea6afb0401-combined-ca-bundle\") pod \"neutron-db-sync-rqjpw\" (UID: \"f7c0ef06-5806-418c-8a10-81ea6afb0401\") " pod="openstack/neutron-db-sync-rqjpw"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.413882 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jb9wc\" (UniqueName: \"kubernetes.io/projected/5c4b43cf-c766-4b56-a016-a3f2d26656a1-kube-api-access-jb9wc\") pod \"horizon-89bdb59-vr94p\" (UID: \"5c4b43cf-c766-4b56-a016-a3f2d26656a1\") " pod="openstack/horizon-89bdb59-vr94p"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.413909 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5c4b43cf-c766-4b56-a016-a3f2d26656a1-config-data\") pod \"horizon-89bdb59-vr94p\" (UID: \"5c4b43cf-c766-4b56-a016-a3f2d26656a1\") " pod="openstack/horizon-89bdb59-vr94p"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.413931 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0a7a7218-57a6-4091-9bd0-568fda3122fd-dns-svc\") pod \"dnsmasq-dns-55fff446b9-h5gf8\" (UID: \"0a7a7218-57a6-4091-9bd0-568fda3122fd\") " pod="openstack/dnsmasq-dns-55fff446b9-h5gf8"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.413950 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c4b43cf-c766-4b56-a016-a3f2d26656a1-logs\") pod \"horizon-89bdb59-vr94p\" (UID: \"5c4b43cf-c766-4b56-a016-a3f2d26656a1\") " pod="openstack/horizon-89bdb59-vr94p"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.413971 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5c4b43cf-c766-4b56-a016-a3f2d26656a1-horizon-secret-key\") pod \"horizon-89bdb59-vr94p\" (UID: \"5c4b43cf-c766-4b56-a016-a3f2d26656a1\") " pod="openstack/horizon-89bdb59-vr94p"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.413993 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cpcr\" (UniqueName: \"kubernetes.io/projected/0a7a7218-57a6-4091-9bd0-568fda3122fd-kube-api-access-7cpcr\") pod \"dnsmasq-dns-55fff446b9-h5gf8\" (UID: \"0a7a7218-57a6-4091-9bd0-568fda3122fd\") " pod="openstack/dnsmasq-dns-55fff446b9-h5gf8"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.414016 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a7a7218-57a6-4091-9bd0-568fda3122fd-config\") pod \"dnsmasq-dns-55fff446b9-h5gf8\" (UID: \"0a7a7218-57a6-4091-9bd0-568fda3122fd\") " pod="openstack/dnsmasq-dns-55fff446b9-h5gf8"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.414035 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0a7a7218-57a6-4091-9bd0-568fda3122fd-dns-swift-storage-0\") pod \"dnsmasq-dns-55fff446b9-h5gf8\" (UID: \"0a7a7218-57a6-4091-9bd0-568fda3122fd\") " pod="openstack/dnsmasq-dns-55fff446b9-h5gf8"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.422146 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0a7a7218-57a6-4091-9bd0-568fda3122fd-ovsdbserver-nb\") pod \"dnsmasq-dns-55fff446b9-h5gf8\" (UID: \"0a7a7218-57a6-4091-9bd0-568fda3122fd\") " pod="openstack/dnsmasq-dns-55fff446b9-h5gf8"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.422239 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0a7a7218-57a6-4091-9bd0-568fda3122fd-dns-svc\") pod \"dnsmasq-dns-55fff446b9-h5gf8\" (UID: \"0a7a7218-57a6-4091-9bd0-568fda3122fd\") " pod="openstack/dnsmasq-dns-55fff446b9-h5gf8"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.422313 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0a7a7218-57a6-4091-9bd0-568fda3122fd-ovsdbserver-sb\") pod \"dnsmasq-dns-55fff446b9-h5gf8\" (UID: \"0a7a7218-57a6-4091-9bd0-568fda3122fd\") " pod="openstack/dnsmasq-dns-55fff446b9-h5gf8"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.422493 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0a7a7218-57a6-4091-9bd0-568fda3122fd-dns-swift-storage-0\") pod \"dnsmasq-dns-55fff446b9-h5gf8\" (UID: \"0a7a7218-57a6-4091-9bd0-568fda3122fd\") " pod="openstack/dnsmasq-dns-55fff446b9-h5gf8"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.422504 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a7a7218-57a6-4091-9bd0-568fda3122fd-config\") pod \"dnsmasq-dns-55fff446b9-h5gf8\" (UID: \"0a7a7218-57a6-4091-9bd0-568fda3122fd\") " pod="openstack/dnsmasq-dns-55fff446b9-h5gf8"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.463000 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7cpcr\" (UniqueName: \"kubernetes.io/projected/0a7a7218-57a6-4091-9bd0-568fda3122fd-kube-api-access-7cpcr\") pod \"dnsmasq-dns-55fff446b9-h5gf8\" (UID: \"0a7a7218-57a6-4091-9bd0-568fda3122fd\") " pod="openstack/dnsmasq-dns-55fff446b9-h5gf8"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.468883 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-l4hnw"]
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.469985 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-l4hnw"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.474298 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.474479 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.474493 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-m6vjl"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.490645 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.492725 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.502016 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.506642 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-l4hnw"]
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.511089 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.516998 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c4b43cf-c766-4b56-a016-a3f2d26656a1-logs\") pod \"horizon-89bdb59-vr94p\" (UID: \"5c4b43cf-c766-4b56-a016-a3f2d26656a1\") " pod="openstack/horizon-89bdb59-vr94p"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.517077 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5c4b43cf-c766-4b56-a016-a3f2d26656a1-horizon-secret-key\") pod \"horizon-89bdb59-vr94p\" (UID: \"5c4b43cf-c766-4b56-a016-a3f2d26656a1\") " pod="openstack/horizon-89bdb59-vr94p"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.517145 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzsdc\" (UniqueName: \"kubernetes.io/projected/f7c0ef06-5806-418c-8a10-81ea6afb0401-kube-api-access-rzsdc\") pod \"neutron-db-sync-rqjpw\" (UID: \"f7c0ef06-5806-418c-8a10-81ea6afb0401\") " pod="openstack/neutron-db-sync-rqjpw"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.517175 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5c4b43cf-c766-4b56-a016-a3f2d26656a1-scripts\") pod \"horizon-89bdb59-vr94p\" (UID: \"5c4b43cf-c766-4b56-a016-a3f2d26656a1\") " pod="openstack/horizon-89bdb59-vr94p"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.517245 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f7c0ef06-5806-418c-8a10-81ea6afb0401-config\") pod \"neutron-db-sync-rqjpw\" (UID: \"f7c0ef06-5806-418c-8a10-81ea6afb0401\") " pod="openstack/neutron-db-sync-rqjpw"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.517275 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7c0ef06-5806-418c-8a10-81ea6afb0401-combined-ca-bundle\") pod \"neutron-db-sync-rqjpw\" (UID: \"f7c0ef06-5806-418c-8a10-81ea6afb0401\") " pod="openstack/neutron-db-sync-rqjpw"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.517300 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jb9wc\" (UniqueName: \"kubernetes.io/projected/5c4b43cf-c766-4b56-a016-a3f2d26656a1-kube-api-access-jb9wc\") pod \"horizon-89bdb59-vr94p\" (UID: \"5c4b43cf-c766-4b56-a016-a3f2d26656a1\") " pod="openstack/horizon-89bdb59-vr94p"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.517336 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5c4b43cf-c766-4b56-a016-a3f2d26656a1-config-data\") pod \"horizon-89bdb59-vr94p\" (UID: \"5c4b43cf-c766-4b56-a016-a3f2d26656a1\") " pod="openstack/horizon-89bdb59-vr94p"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.517393 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c4b43cf-c766-4b56-a016-a3f2d26656a1-logs\") pod \"horizon-89bdb59-vr94p\" (UID: \"5c4b43cf-c766-4b56-a016-a3f2d26656a1\") " pod="openstack/horizon-89bdb59-vr94p"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.518045 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5c4b43cf-c766-4b56-a016-a3f2d26656a1-scripts\") pod \"horizon-89bdb59-vr94p\" (UID: \"5c4b43cf-c766-4b56-a016-a3f2d26656a1\") " pod="openstack/horizon-89bdb59-vr94p"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.518936 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5c4b43cf-c766-4b56-a016-a3f2d26656a1-config-data\") pod \"horizon-89bdb59-vr94p\" (UID: \"5c4b43cf-c766-4b56-a016-a3f2d26656a1\") " pod="openstack/horizon-89bdb59-vr94p"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.521749 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5c4b43cf-c766-4b56-a016-a3f2d26656a1-horizon-secret-key\") pod \"horizon-89bdb59-vr94p\" (UID: \"5c4b43cf-c766-4b56-a016-a3f2d26656a1\") " pod="openstack/horizon-89bdb59-vr94p"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.526998 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7c0ef06-5806-418c-8a10-81ea6afb0401-combined-ca-bundle\") pod \"neutron-db-sync-rqjpw\" (UID: \"f7c0ef06-5806-418c-8a10-81ea6afb0401\") " pod="openstack/neutron-db-sync-rqjpw"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.537721 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/f7c0ef06-5806-418c-8a10-81ea6afb0401-config\") pod \"neutron-db-sync-rqjpw\" (UID: \"f7c0ef06-5806-418c-8a10-81ea6afb0401\") " pod="openstack/neutron-db-sync-rqjpw"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.541243 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-zzjpd"]
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.547484 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-zzjpd"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.571170 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.574844 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-zzjpd"]
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.576883 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-qkkxv"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.581004 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzsdc\" (UniqueName: \"kubernetes.io/projected/f7c0ef06-5806-418c-8a10-81ea6afb0401-kube-api-access-rzsdc\") pod \"neutron-db-sync-rqjpw\" (UID: \"f7c0ef06-5806-418c-8a10-81ea6afb0401\") " pod="openstack/neutron-db-sync-rqjpw"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.586822 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55fff446b9-h5gf8"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.600210 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jb9wc\" (UniqueName: \"kubernetes.io/projected/5c4b43cf-c766-4b56-a016-a3f2d26656a1-kube-api-access-jb9wc\") pod \"horizon-89bdb59-vr94p\" (UID: \"5c4b43cf-c766-4b56-a016-a3f2d26656a1\") " pod="openstack/horizon-89bdb59-vr94p"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.615577 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.619204 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a7f766e1-262c-4861-a117-2454631e284f-db-sync-config-data\") pod \"barbican-db-sync-zzjpd\" (UID: \"a7f766e1-262c-4861-a117-2454631e284f\") " pod="openstack/barbican-db-sync-zzjpd"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.619264 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3eb8819f-512d-43d8-a59e-1ba8e7e1fb06-scripts\") pod \"cinder-db-sync-l4hnw\" (UID: \"3eb8819f-512d-43d8-a59e-1ba8e7e1fb06\") " pod="openstack/cinder-db-sync-l4hnw"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.619290 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7464458e-c450-4b87-80d6-30abeb62e9d2-log-httpd\") pod \"ceilometer-0\" (UID: \"7464458e-c450-4b87-80d6-30abeb62e9d2\") " pod="openstack/ceilometer-0"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.619317 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7464458e-c450-4b87-80d6-30abeb62e9d2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7464458e-c450-4b87-80d6-30abeb62e9d2\") " pod="openstack/ceilometer-0"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.619343 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrgpx\" (UniqueName: \"kubernetes.io/projected/3eb8819f-512d-43d8-a59e-1ba8e7e1fb06-kube-api-access-hrgpx\") pod \"cinder-db-sync-l4hnw\" (UID: \"3eb8819f-512d-43d8-a59e-1ba8e7e1fb06\") " pod="openstack/cinder-db-sync-l4hnw"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.619370 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnkhr\" (UniqueName: \"kubernetes.io/projected/7464458e-c450-4b87-80d6-30abeb62e9d2-kube-api-access-bnkhr\") pod \"ceilometer-0\" (UID: \"7464458e-c450-4b87-80d6-30abeb62e9d2\") " pod="openstack/ceilometer-0"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.619390 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7f766e1-262c-4861-a117-2454631e284f-combined-ca-bundle\") pod \"barbican-db-sync-zzjpd\" (UID: \"a7f766e1-262c-4861-a117-2454631e284f\") " pod="openstack/barbican-db-sync-zzjpd"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.619432 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName:
\"kubernetes.io/secret/7464458e-c450-4b87-80d6-30abeb62e9d2-config-data\") pod \"ceilometer-0\" (UID: \"7464458e-c450-4b87-80d6-30abeb62e9d2\") " pod="openstack/ceilometer-0" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.619451 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7464458e-c450-4b87-80d6-30abeb62e9d2-run-httpd\") pod \"ceilometer-0\" (UID: \"7464458e-c450-4b87-80d6-30abeb62e9d2\") " pod="openstack/ceilometer-0" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.619469 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbsw7\" (UniqueName: \"kubernetes.io/projected/a7f766e1-262c-4861-a117-2454631e284f-kube-api-access-pbsw7\") pod \"barbican-db-sync-zzjpd\" (UID: \"a7f766e1-262c-4861-a117-2454631e284f\") " pod="openstack/barbican-db-sync-zzjpd" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.619484 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7464458e-c450-4b87-80d6-30abeb62e9d2-scripts\") pod \"ceilometer-0\" (UID: \"7464458e-c450-4b87-80d6-30abeb62e9d2\") " pod="openstack/ceilometer-0" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.619500 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3eb8819f-512d-43d8-a59e-1ba8e7e1fb06-combined-ca-bundle\") pod \"cinder-db-sync-l4hnw\" (UID: \"3eb8819f-512d-43d8-a59e-1ba8e7e1fb06\") " pod="openstack/cinder-db-sync-l4hnw" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.619533 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7464458e-c450-4b87-80d6-30abeb62e9d2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7464458e-c450-4b87-80d6-30abeb62e9d2\") " pod="openstack/ceilometer-0" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.619563 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3eb8819f-512d-43d8-a59e-1ba8e7e1fb06-db-sync-config-data\") pod \"cinder-db-sync-l4hnw\" (UID: \"3eb8819f-512d-43d8-a59e-1ba8e7e1fb06\") " pod="openstack/cinder-db-sync-l4hnw" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.619592 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3eb8819f-512d-43d8-a59e-1ba8e7e1fb06-etc-machine-id\") pod \"cinder-db-sync-l4hnw\" (UID: \"3eb8819f-512d-43d8-a59e-1ba8e7e1fb06\") " pod="openstack/cinder-db-sync-l4hnw" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.619608 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3eb8819f-512d-43d8-a59e-1ba8e7e1fb06-config-data\") pod \"cinder-db-sync-l4hnw\" (UID: \"3eb8819f-512d-43d8-a59e-1ba8e7e1fb06\") " pod="openstack/cinder-db-sync-l4hnw" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.635176 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5c66f6f78c-g92qm"] Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.636681 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5c66f6f78c-g92qm" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.641263 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-89bdb59-vr94p" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.644860 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5c66f6f78c-g92qm"] Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.667575 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-bjdj8"] Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.687945 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-bjdj8" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.694627 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-dx89d" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.695331 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.696980 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-bjdj8"] Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.703821 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.734451 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0e92228-1a9b-49fc-9dfd-0493f70f5ee8-scripts\") pod \"placement-db-sync-bjdj8\" (UID: \"a0e92228-1a9b-49fc-9dfd-0493f70f5ee8\") " pod="openstack/placement-db-sync-bjdj8" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.734767 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7464458e-c450-4b87-80d6-30abeb62e9d2-config-data\") pod \"ceilometer-0\" (UID: \"7464458e-c450-4b87-80d6-30abeb62e9d2\") " pod="openstack/ceilometer-0" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.734805 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a0e92228-1a9b-49fc-9dfd-0493f70f5ee8-logs\") pod \"placement-db-sync-bjdj8\" (UID: \"a0e92228-1a9b-49fc-9dfd-0493f70f5ee8\") " pod="openstack/placement-db-sync-bjdj8" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.734825 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7464458e-c450-4b87-80d6-30abeb62e9d2-run-httpd\") pod \"ceilometer-0\" (UID: \"7464458e-c450-4b87-80d6-30abeb62e9d2\") " pod="openstack/ceilometer-0" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.734846 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbsw7\" (UniqueName: \"kubernetes.io/projected/a7f766e1-262c-4861-a117-2454631e284f-kube-api-access-pbsw7\") pod \"barbican-db-sync-zzjpd\" (UID: \"a7f766e1-262c-4861-a117-2454631e284f\") " pod="openstack/barbican-db-sync-zzjpd" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.734871 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7464458e-c450-4b87-80d6-30abeb62e9d2-scripts\") pod \"ceilometer-0\" (UID: 
\"7464458e-c450-4b87-80d6-30abeb62e9d2\") " pod="openstack/ceilometer-0" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.734886 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3eb8819f-512d-43d8-a59e-1ba8e7e1fb06-combined-ca-bundle\") pod \"cinder-db-sync-l4hnw\" (UID: \"3eb8819f-512d-43d8-a59e-1ba8e7e1fb06\") " pod="openstack/cinder-db-sync-l4hnw" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.734904 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f79e78c3-4c98-41e2-be1e-19d794ed1c17-logs\") pod \"horizon-5c66f6f78c-g92qm\" (UID: \"f79e78c3-4c98-41e2-be1e-19d794ed1c17\") " pod="openstack/horizon-5c66f6f78c-g92qm" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.734919 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f79e78c3-4c98-41e2-be1e-19d794ed1c17-horizon-secret-key\") pod \"horizon-5c66f6f78c-g92qm\" (UID: \"f79e78c3-4c98-41e2-be1e-19d794ed1c17\") " pod="openstack/horizon-5c66f6f78c-g92qm" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.734942 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7464458e-c450-4b87-80d6-30abeb62e9d2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7464458e-c450-4b87-80d6-30abeb62e9d2\") " pod="openstack/ceilometer-0" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.734966 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfrwn\" (UniqueName: \"kubernetes.io/projected/f79e78c3-4c98-41e2-be1e-19d794ed1c17-kube-api-access-wfrwn\") pod \"horizon-5c66f6f78c-g92qm\" (UID: \"f79e78c3-4c98-41e2-be1e-19d794ed1c17\") " pod="openstack/horizon-5c66f6f78c-g92qm" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.734993 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3eb8819f-512d-43d8-a59e-1ba8e7e1fb06-db-sync-config-data\") pod \"cinder-db-sync-l4hnw\" (UID: \"3eb8819f-512d-43d8-a59e-1ba8e7e1fb06\") " pod="openstack/cinder-db-sync-l4hnw" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.735022 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3eb8819f-512d-43d8-a59e-1ba8e7e1fb06-etc-machine-id\") pod \"cinder-db-sync-l4hnw\" (UID: \"3eb8819f-512d-43d8-a59e-1ba8e7e1fb06\") " pod="openstack/cinder-db-sync-l4hnw" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.735037 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f79e78c3-4c98-41e2-be1e-19d794ed1c17-scripts\") pod \"horizon-5c66f6f78c-g92qm\" (UID: \"f79e78c3-4c98-41e2-be1e-19d794ed1c17\") " pod="openstack/horizon-5c66f6f78c-g92qm" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.735054 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3eb8819f-512d-43d8-a59e-1ba8e7e1fb06-config-data\") pod \"cinder-db-sync-l4hnw\" (UID: \"3eb8819f-512d-43d8-a59e-1ba8e7e1fb06\") " pod="openstack/cinder-db-sync-l4hnw" Jan 22 14:01:17 crc 
kubenswrapper[4769]: I0122 14:01:17.735073 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a7f766e1-262c-4861-a117-2454631e284f-db-sync-config-data\") pod \"barbican-db-sync-zzjpd\" (UID: \"a7f766e1-262c-4861-a117-2454631e284f\") " pod="openstack/barbican-db-sync-zzjpd" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.735089 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0e92228-1a9b-49fc-9dfd-0493f70f5ee8-config-data\") pod \"placement-db-sync-bjdj8\" (UID: \"a0e92228-1a9b-49fc-9dfd-0493f70f5ee8\") " pod="openstack/placement-db-sync-bjdj8" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.735112 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3eb8819f-512d-43d8-a59e-1ba8e7e1fb06-scripts\") pod \"cinder-db-sync-l4hnw\" (UID: \"3eb8819f-512d-43d8-a59e-1ba8e7e1fb06\") " pod="openstack/cinder-db-sync-l4hnw" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.735133 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7464458e-c450-4b87-80d6-30abeb62e9d2-log-httpd\") pod \"ceilometer-0\" (UID: \"7464458e-c450-4b87-80d6-30abeb62e9d2\") " pod="openstack/ceilometer-0" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.735156 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g78xp\" (UniqueName: \"kubernetes.io/projected/a0e92228-1a9b-49fc-9dfd-0493f70f5ee8-kube-api-access-g78xp\") pod \"placement-db-sync-bjdj8\" (UID: \"a0e92228-1a9b-49fc-9dfd-0493f70f5ee8\") " pod="openstack/placement-db-sync-bjdj8" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.735173 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7464458e-c450-4b87-80d6-30abeb62e9d2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7464458e-c450-4b87-80d6-30abeb62e9d2\") " pod="openstack/ceilometer-0" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.735191 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f79e78c3-4c98-41e2-be1e-19d794ed1c17-config-data\") pod \"horizon-5c66f6f78c-g92qm\" (UID: \"f79e78c3-4c98-41e2-be1e-19d794ed1c17\") " pod="openstack/horizon-5c66f6f78c-g92qm" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.735216 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrgpx\" (UniqueName: \"kubernetes.io/projected/3eb8819f-512d-43d8-a59e-1ba8e7e1fb06-kube-api-access-hrgpx\") pod \"cinder-db-sync-l4hnw\" (UID: \"3eb8819f-512d-43d8-a59e-1ba8e7e1fb06\") " pod="openstack/cinder-db-sync-l4hnw" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.735239 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnkhr\" (UniqueName: \"kubernetes.io/projected/7464458e-c450-4b87-80d6-30abeb62e9d2-kube-api-access-bnkhr\") pod \"ceilometer-0\" (UID: \"7464458e-c450-4b87-80d6-30abeb62e9d2\") " pod="openstack/ceilometer-0" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.735254 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0e92228-1a9b-49fc-9dfd-0493f70f5ee8-combined-ca-bundle\") pod \"placement-db-sync-bjdj8\" (UID: \"a0e92228-1a9b-49fc-9dfd-0493f70f5ee8\") " pod="openstack/placement-db-sync-bjdj8" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.735273 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7f766e1-262c-4861-a117-2454631e284f-combined-ca-bundle\") pod \"barbican-db-sync-zzjpd\" (UID: \"a7f766e1-262c-4861-a117-2454631e284f\") " pod="openstack/barbican-db-sync-zzjpd" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.738448 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3eb8819f-512d-43d8-a59e-1ba8e7e1fb06-etc-machine-id\") pod \"cinder-db-sync-l4hnw\" (UID: \"3eb8819f-512d-43d8-a59e-1ba8e7e1fb06\") " pod="openstack/cinder-db-sync-l4hnw" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.739260 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7464458e-c450-4b87-80d6-30abeb62e9d2-log-httpd\") pod \"ceilometer-0\" (UID: \"7464458e-c450-4b87-80d6-30abeb62e9d2\") " pod="openstack/ceilometer-0" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.739675 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7f766e1-262c-4861-a117-2454631e284f-combined-ca-bundle\") pod \"barbican-db-sync-zzjpd\" (UID: \"a7f766e1-262c-4861-a117-2454631e284f\") " pod="openstack/barbican-db-sync-zzjpd" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.739934 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7464458e-c450-4b87-80d6-30abeb62e9d2-run-httpd\") pod \"ceilometer-0\" (UID: \"7464458e-c450-4b87-80d6-30abeb62e9d2\") " pod="openstack/ceilometer-0" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.741304 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7464458e-c450-4b87-80d6-30abeb62e9d2-config-data\") pod \"ceilometer-0\" (UID: \"7464458e-c450-4b87-80d6-30abeb62e9d2\") " pod="openstack/ceilometer-0" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.742618 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7464458e-c450-4b87-80d6-30abeb62e9d2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7464458e-c450-4b87-80d6-30abeb62e9d2\") " pod="openstack/ceilometer-0" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.746110 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7464458e-c450-4b87-80d6-30abeb62e9d2-scripts\") pod \"ceilometer-0\" (UID: \"7464458e-c450-4b87-80d6-30abeb62e9d2\") " pod="openstack/ceilometer-0" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.749641 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-h5gf8"] Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.752585 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3eb8819f-512d-43d8-a59e-1ba8e7e1fb06-combined-ca-bundle\") pod \"cinder-db-sync-l4hnw\" (UID: 
\"3eb8819f-512d-43d8-a59e-1ba8e7e1fb06\") " pod="openstack/cinder-db-sync-l4hnw" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.764637 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7464458e-c450-4b87-80d6-30abeb62e9d2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7464458e-c450-4b87-80d6-30abeb62e9d2\") " pod="openstack/ceilometer-0" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.764945 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a7f766e1-262c-4861-a117-2454631e284f-db-sync-config-data\") pod \"barbican-db-sync-zzjpd\" (UID: \"a7f766e1-262c-4861-a117-2454631e284f\") " pod="openstack/barbican-db-sync-zzjpd" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.767701 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3eb8819f-512d-43d8-a59e-1ba8e7e1fb06-scripts\") pod \"cinder-db-sync-l4hnw\" (UID: \"3eb8819f-512d-43d8-a59e-1ba8e7e1fb06\") " pod="openstack/cinder-db-sync-l4hnw" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.772820 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3eb8819f-512d-43d8-a59e-1ba8e7e1fb06-config-data\") pod \"cinder-db-sync-l4hnw\" (UID: \"3eb8819f-512d-43d8-a59e-1ba8e7e1fb06\") " pod="openstack/cinder-db-sync-l4hnw" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.773857 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3eb8819f-512d-43d8-a59e-1ba8e7e1fb06-db-sync-config-data\") pod \"cinder-db-sync-l4hnw\" (UID: \"3eb8819f-512d-43d8-a59e-1ba8e7e1fb06\") " pod="openstack/cinder-db-sync-l4hnw" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.780003 4769 generic.go:334] "Generic (PLEG): container finished" podID="d83064db-7f62-4af5-9747-89e9054b3a16" containerID="c113fcdaeea4262d86857b16ac35b7758e49c93e4706f03d96c76c2d8565a5e4" exitCode=0 Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.780391 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-xsb4l" event={"ID":"d83064db-7f62-4af5-9747-89e9054b3a16","Type":"ContainerDied","Data":"c113fcdaeea4262d86857b16ac35b7758e49c93e4706f03d96c76c2d8565a5e4"} Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.801380 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrgpx\" (UniqueName: \"kubernetes.io/projected/3eb8819f-512d-43d8-a59e-1ba8e7e1fb06-kube-api-access-hrgpx\") pod \"cinder-db-sync-l4hnw\" (UID: \"3eb8819f-512d-43d8-a59e-1ba8e7e1fb06\") " pod="openstack/cinder-db-sync-l4hnw" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.818655 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-rqjpw" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.819148 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbsw7\" (UniqueName: \"kubernetes.io/projected/a7f766e1-262c-4861-a117-2454631e284f-kube-api-access-pbsw7\") pod \"barbican-db-sync-zzjpd\" (UID: \"a7f766e1-262c-4861-a117-2454631e284f\") " pod="openstack/barbican-db-sync-zzjpd" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.841073 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-x8v8z"] Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.842520 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76fcf4b695-x8v8z" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.844341 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0e92228-1a9b-49fc-9dfd-0493f70f5ee8-scripts\") pod \"placement-db-sync-bjdj8\" (UID: \"a0e92228-1a9b-49fc-9dfd-0493f70f5ee8\") " pod="openstack/placement-db-sync-bjdj8" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.844393 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a0e92228-1a9b-49fc-9dfd-0493f70f5ee8-logs\") pod \"placement-db-sync-bjdj8\" (UID: \"a0e92228-1a9b-49fc-9dfd-0493f70f5ee8\") " pod="openstack/placement-db-sync-bjdj8" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.844419 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f79e78c3-4c98-41e2-be1e-19d794ed1c17-logs\") pod \"horizon-5c66f6f78c-g92qm\" (UID: \"f79e78c3-4c98-41e2-be1e-19d794ed1c17\") " pod="openstack/horizon-5c66f6f78c-g92qm" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.844437 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f79e78c3-4c98-41e2-be1e-19d794ed1c17-horizon-secret-key\") pod \"horizon-5c66f6f78c-g92qm\" (UID: \"f79e78c3-4c98-41e2-be1e-19d794ed1c17\") " pod="openstack/horizon-5c66f6f78c-g92qm" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.844500 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfrwn\" (UniqueName: \"kubernetes.io/projected/f79e78c3-4c98-41e2-be1e-19d794ed1c17-kube-api-access-wfrwn\") pod \"horizon-5c66f6f78c-g92qm\" (UID: \"f79e78c3-4c98-41e2-be1e-19d794ed1c17\") " pod="openstack/horizon-5c66f6f78c-g92qm" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.844578 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f79e78c3-4c98-41e2-be1e-19d794ed1c17-scripts\") pod \"horizon-5c66f6f78c-g92qm\" (UID: \"f79e78c3-4c98-41e2-be1e-19d794ed1c17\") " pod="openstack/horizon-5c66f6f78c-g92qm" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.844636 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0e92228-1a9b-49fc-9dfd-0493f70f5ee8-config-data\") pod \"placement-db-sync-bjdj8\" (UID: \"a0e92228-1a9b-49fc-9dfd-0493f70f5ee8\") " pod="openstack/placement-db-sync-bjdj8" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.844691 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-g78xp\" (UniqueName: \"kubernetes.io/projected/a0e92228-1a9b-49fc-9dfd-0493f70f5ee8-kube-api-access-g78xp\") pod \"placement-db-sync-bjdj8\" (UID: \"a0e92228-1a9b-49fc-9dfd-0493f70f5ee8\") " pod="openstack/placement-db-sync-bjdj8" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.844747 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f79e78c3-4c98-41e2-be1e-19d794ed1c17-config-data\") pod \"horizon-5c66f6f78c-g92qm\" (UID: \"f79e78c3-4c98-41e2-be1e-19d794ed1c17\") " pod="openstack/horizon-5c66f6f78c-g92qm" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.844839 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0e92228-1a9b-49fc-9dfd-0493f70f5ee8-combined-ca-bundle\") pod \"placement-db-sync-bjdj8\" (UID: \"a0e92228-1a9b-49fc-9dfd-0493f70f5ee8\") " pod="openstack/placement-db-sync-bjdj8" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.859171 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a0e92228-1a9b-49fc-9dfd-0493f70f5ee8-logs\") pod \"placement-db-sync-bjdj8\" (UID: \"a0e92228-1a9b-49fc-9dfd-0493f70f5ee8\") " pod="openstack/placement-db-sync-bjdj8" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.859693 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f79e78c3-4c98-41e2-be1e-19d794ed1c17-scripts\") pod \"horizon-5c66f6f78c-g92qm\" (UID: \"f79e78c3-4c98-41e2-be1e-19d794ed1c17\") " pod="openstack/horizon-5c66f6f78c-g92qm" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.859922 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f79e78c3-4c98-41e2-be1e-19d794ed1c17-logs\") pod \"horizon-5c66f6f78c-g92qm\" (UID: \"f79e78c3-4c98-41e2-be1e-19d794ed1c17\") " pod="openstack/horizon-5c66f6f78c-g92qm" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.863925 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f79e78c3-4c98-41e2-be1e-19d794ed1c17-config-data\") pod \"horizon-5c66f6f78c-g92qm\" (UID: \"f79e78c3-4c98-41e2-be1e-19d794ed1c17\") " pod="openstack/horizon-5c66f6f78c-g92qm" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.864288 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-l4hnw" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.865178 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0e92228-1a9b-49fc-9dfd-0493f70f5ee8-combined-ca-bundle\") pod \"placement-db-sync-bjdj8\" (UID: \"a0e92228-1a9b-49fc-9dfd-0493f70f5ee8\") " pod="openstack/placement-db-sync-bjdj8" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.868634 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f79e78c3-4c98-41e2-be1e-19d794ed1c17-horizon-secret-key\") pod \"horizon-5c66f6f78c-g92qm\" (UID: \"f79e78c3-4c98-41e2-be1e-19d794ed1c17\") " pod="openstack/horizon-5c66f6f78c-g92qm" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.872742 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnkhr\" (UniqueName: \"kubernetes.io/projected/7464458e-c450-4b87-80d6-30abeb62e9d2-kube-api-access-bnkhr\") pod \"ceilometer-0\" (UID: \"7464458e-c450-4b87-80d6-30abeb62e9d2\") " pod="openstack/ceilometer-0" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.873128 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0e92228-1a9b-49fc-9dfd-0493f70f5ee8-scripts\") pod \"placement-db-sync-bjdj8\" (UID: \"a0e92228-1a9b-49fc-9dfd-0493f70f5ee8\") " pod="openstack/placement-db-sync-bjdj8" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.873238 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0e92228-1a9b-49fc-9dfd-0493f70f5ee8-config-data\") pod \"placement-db-sync-bjdj8\" (UID: \"a0e92228-1a9b-49fc-9dfd-0493f70f5ee8\") " pod="openstack/placement-db-sync-bjdj8" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.875241 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-x8v8z"] Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.906941 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfrwn\" (UniqueName: \"kubernetes.io/projected/f79e78c3-4c98-41e2-be1e-19d794ed1c17-kube-api-access-wfrwn\") pod \"horizon-5c66f6f78c-g92qm\" (UID: \"f79e78c3-4c98-41e2-be1e-19d794ed1c17\") " pod="openstack/horizon-5c66f6f78c-g92qm" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.907330 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.931914 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g78xp\" (UniqueName: \"kubernetes.io/projected/a0e92228-1a9b-49fc-9dfd-0493f70f5ee8-kube-api-access-g78xp\") pod \"placement-db-sync-bjdj8\" (UID: \"a0e92228-1a9b-49fc-9dfd-0493f70f5ee8\") " pod="openstack/placement-db-sync-bjdj8" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.953680 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-zzjpd" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.969922 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wf4pp\" (UniqueName: \"kubernetes.io/projected/3c9c86b4-dc88-4cbe-82e1-40198f4b39cd-kube-api-access-wf4pp\") pod \"dnsmasq-dns-76fcf4b695-x8v8z\" (UID: \"3c9c86b4-dc88-4cbe-82e1-40198f4b39cd\") " pod="openstack/dnsmasq-dns-76fcf4b695-x8v8z" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.969983 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c9c86b4-dc88-4cbe-82e1-40198f4b39cd-dns-svc\") pod \"dnsmasq-dns-76fcf4b695-x8v8z\" (UID: \"3c9c86b4-dc88-4cbe-82e1-40198f4b39cd\") " pod="openstack/dnsmasq-dns-76fcf4b695-x8v8z" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.970010 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c9c86b4-dc88-4cbe-82e1-40198f4b39cd-config\") pod \"dnsmasq-dns-76fcf4b695-x8v8z\" (UID: \"3c9c86b4-dc88-4cbe-82e1-40198f4b39cd\") " pod="openstack/dnsmasq-dns-76fcf4b695-x8v8z" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.970076 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c9c86b4-dc88-4cbe-82e1-40198f4b39cd-ovsdbserver-sb\") pod \"dnsmasq-dns-76fcf4b695-x8v8z\" (UID: \"3c9c86b4-dc88-4cbe-82e1-40198f4b39cd\") " pod="openstack/dnsmasq-dns-76fcf4b695-x8v8z" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.970098 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c9c86b4-dc88-4cbe-82e1-40198f4b39cd-ovsdbserver-nb\") pod \"dnsmasq-dns-76fcf4b695-x8v8z\" (UID: \"3c9c86b4-dc88-4cbe-82e1-40198f4b39cd\") " pod="openstack/dnsmasq-dns-76fcf4b695-x8v8z" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.970156 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3c9c86b4-dc88-4cbe-82e1-40198f4b39cd-dns-swift-storage-0\") pod \"dnsmasq-dns-76fcf4b695-x8v8z\" (UID: \"3c9c86b4-dc88-4cbe-82e1-40198f4b39cd\") " pod="openstack/dnsmasq-dns-76fcf4b695-x8v8z" Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.012307 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5c66f6f78c-g92qm" Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.042651 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-bjdj8" Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.071860 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3c9c86b4-dc88-4cbe-82e1-40198f4b39cd-dns-swift-storage-0\") pod \"dnsmasq-dns-76fcf4b695-x8v8z\" (UID: \"3c9c86b4-dc88-4cbe-82e1-40198f4b39cd\") " pod="openstack/dnsmasq-dns-76fcf4b695-x8v8z" Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.071952 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wf4pp\" (UniqueName: \"kubernetes.io/projected/3c9c86b4-dc88-4cbe-82e1-40198f4b39cd-kube-api-access-wf4pp\") pod \"dnsmasq-dns-76fcf4b695-x8v8z\" (UID: \"3c9c86b4-dc88-4cbe-82e1-40198f4b39cd\") " pod="openstack/dnsmasq-dns-76fcf4b695-x8v8z" Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.071974 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c9c86b4-dc88-4cbe-82e1-40198f4b39cd-dns-svc\") pod \"dnsmasq-dns-76fcf4b695-x8v8z\" (UID: \"3c9c86b4-dc88-4cbe-82e1-40198f4b39cd\") " pod="openstack/dnsmasq-dns-76fcf4b695-x8v8z" Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.071990 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c9c86b4-dc88-4cbe-82e1-40198f4b39cd-config\") pod \"dnsmasq-dns-76fcf4b695-x8v8z\" (UID: \"3c9c86b4-dc88-4cbe-82e1-40198f4b39cd\") " pod="openstack/dnsmasq-dns-76fcf4b695-x8v8z" Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.072039 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c9c86b4-dc88-4cbe-82e1-40198f4b39cd-ovsdbserver-sb\") pod \"dnsmasq-dns-76fcf4b695-x8v8z\" (UID: \"3c9c86b4-dc88-4cbe-82e1-40198f4b39cd\") " pod="openstack/dnsmasq-dns-76fcf4b695-x8v8z" Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.072057 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c9c86b4-dc88-4cbe-82e1-40198f4b39cd-ovsdbserver-nb\") pod \"dnsmasq-dns-76fcf4b695-x8v8z\" (UID: \"3c9c86b4-dc88-4cbe-82e1-40198f4b39cd\") " pod="openstack/dnsmasq-dns-76fcf4b695-x8v8z" Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.073161 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c9c86b4-dc88-4cbe-82e1-40198f4b39cd-ovsdbserver-nb\") pod \"dnsmasq-dns-76fcf4b695-x8v8z\" (UID: \"3c9c86b4-dc88-4cbe-82e1-40198f4b39cd\") " pod="openstack/dnsmasq-dns-76fcf4b695-x8v8z" Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.073712 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3c9c86b4-dc88-4cbe-82e1-40198f4b39cd-dns-swift-storage-0\") pod \"dnsmasq-dns-76fcf4b695-x8v8z\" (UID: \"3c9c86b4-dc88-4cbe-82e1-40198f4b39cd\") " pod="openstack/dnsmasq-dns-76fcf4b695-x8v8z" Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.074528 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c9c86b4-dc88-4cbe-82e1-40198f4b39cd-dns-svc\") pod \"dnsmasq-dns-76fcf4b695-x8v8z\" (UID: \"3c9c86b4-dc88-4cbe-82e1-40198f4b39cd\") " pod="openstack/dnsmasq-dns-76fcf4b695-x8v8z" Jan 22 14:01:18 crc 
kubenswrapper[4769]: I0122 14:01:18.075265 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c9c86b4-dc88-4cbe-82e1-40198f4b39cd-config\") pod \"dnsmasq-dns-76fcf4b695-x8v8z\" (UID: \"3c9c86b4-dc88-4cbe-82e1-40198f4b39cd\") " pod="openstack/dnsmasq-dns-76fcf4b695-x8v8z" Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.076587 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c9c86b4-dc88-4cbe-82e1-40198f4b39cd-ovsdbserver-sb\") pod \"dnsmasq-dns-76fcf4b695-x8v8z\" (UID: \"3c9c86b4-dc88-4cbe-82e1-40198f4b39cd\") " pod="openstack/dnsmasq-dns-76fcf4b695-x8v8z" Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.096708 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wf4pp\" (UniqueName: \"kubernetes.io/projected/3c9c86b4-dc88-4cbe-82e1-40198f4b39cd-kube-api-access-wf4pp\") pod \"dnsmasq-dns-76fcf4b695-x8v8z\" (UID: \"3c9c86b4-dc88-4cbe-82e1-40198f4b39cd\") " pod="openstack/dnsmasq-dns-76fcf4b695-x8v8z" Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.126981 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-wdqr9"] Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.208289 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76fcf4b695-x8v8z" Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.543863 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-h5gf8"] Jan 22 14:01:18 crc kubenswrapper[4769]: W0122 14:01:18.558023 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a7a7218_57a6_4091_9bd0_568fda3122fd.slice/crio-4eb18875b161af5978dbb74b8a3a3ee948221ffd789ab96d7eb64d0341675e2e WatchSource:0}: Error finding container 4eb18875b161af5978dbb74b8a3a3ee948221ffd789ab96d7eb64d0341675e2e: Status 404 returned error can't find the container with id 4eb18875b161af5978dbb74b8a3a3ee948221ffd789ab96d7eb64d0341675e2e Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.588822 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-xsb4l" Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.592343 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-t9sxw" Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.684138 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf9rk\" (UniqueName: \"kubernetes.io/projected/d83064db-7f62-4af5-9747-89e9054b3a16-kube-api-access-bf9rk\") pod \"d83064db-7f62-4af5-9747-89e9054b3a16\" (UID: \"d83064db-7f62-4af5-9747-89e9054b3a16\") " Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.685112 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d83064db-7f62-4af5-9747-89e9054b3a16-config\") pod \"d83064db-7f62-4af5-9747-89e9054b3a16\" (UID: \"d83064db-7f62-4af5-9747-89e9054b3a16\") " Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.685151 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d83064db-7f62-4af5-9747-89e9054b3a16-ovsdbserver-sb\") pod \"d83064db-7f62-4af5-9747-89e9054b3a16\" (UID: \"d83064db-7f62-4af5-9747-89e9054b3a16\") " Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.685179 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299-db-sync-config-data\") pod \"b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299\" (UID: \"b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299\") " Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.685249 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299-config-data\") pod \"b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299\" (UID: \"b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299\") " Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.685320 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d83064db-7f62-4af5-9747-89e9054b3a16-dns-svc\") pod \"d83064db-7f62-4af5-9747-89e9054b3a16\" (UID: \"d83064db-7f62-4af5-9747-89e9054b3a16\") " Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.685397 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d83064db-7f62-4af5-9747-89e9054b3a16-ovsdbserver-nb\") pod \"d83064db-7f62-4af5-9747-89e9054b3a16\" (UID: \"d83064db-7f62-4af5-9747-89e9054b3a16\") " Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.685455 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299-combined-ca-bundle\") pod \"b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299\" (UID: \"b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299\") " Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.685508 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2g8wv\" (UniqueName: \"kubernetes.io/projected/b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299-kube-api-access-2g8wv\") pod \"b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299\" (UID: \"b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299\") " Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.685557 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d83064db-7f62-4af5-9747-89e9054b3a16-dns-swift-storage-0\") pod 
\"d83064db-7f62-4af5-9747-89e9054b3a16\" (UID: \"d83064db-7f62-4af5-9747-89e9054b3a16\") " Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.696274 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.705212 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d83064db-7f62-4af5-9747-89e9054b3a16-kube-api-access-bf9rk" (OuterVolumeSpecName: "kube-api-access-bf9rk") pod "d83064db-7f62-4af5-9747-89e9054b3a16" (UID: "d83064db-7f62-4af5-9747-89e9054b3a16"). InnerVolumeSpecName "kube-api-access-bf9rk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.707642 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299" (UID: "b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.732338 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299-kube-api-access-2g8wv" (OuterVolumeSpecName: "kube-api-access-2g8wv") pod "b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299" (UID: "b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299"). InnerVolumeSpecName "kube-api-access-2g8wv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.747303 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-89bdb59-vr94p"] Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.787409 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2g8wv\" (UniqueName: \"kubernetes.io/projected/b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299-kube-api-access-2g8wv\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.787432 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf9rk\" (UniqueName: \"kubernetes.io/projected/d83064db-7f62-4af5-9747-89e9054b3a16-kube-api-access-bf9rk\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.787444 4769 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.788963 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299" (UID: "b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.813149 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d83064db-7f62-4af5-9747-89e9054b3a16-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d83064db-7f62-4af5-9747-89e9054b3a16" (UID: "d83064db-7f62-4af5-9747-89e9054b3a16"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.814045 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wdqr9" event={"ID":"77ac558e-a319-4c27-9869-fee6f85736e5","Type":"ContainerStarted","Data":"df266f1e50e71fe12d82262c0a9066d4bf0ba22b1f00a59909f486af0c226b44"} Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.814083 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wdqr9" event={"ID":"77ac558e-a319-4c27-9869-fee6f85736e5","Type":"ContainerStarted","Data":"6ef39fb051bbbb437f666b731505375e45c29b3f70e4b2350cee07e7caf59e41"} Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.815353 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-t9sxw" event={"ID":"b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299","Type":"ContainerDied","Data":"a42428629277f66933eacee3971f3f2723dc11f98515a43b6d67b24d1023bea8"} Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.815377 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a42428629277f66933eacee3971f3f2723dc11f98515a43b6d67b24d1023bea8" Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.815381 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-t9sxw" Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.818456 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7464458e-c450-4b87-80d6-30abeb62e9d2","Type":"ContainerStarted","Data":"21b21bef7c85b718cfdbb016fe626efbd1ab870c4b734875a383413b1b9ca2cc"} Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.822282 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-xsb4l" event={"ID":"d83064db-7f62-4af5-9747-89e9054b3a16","Type":"ContainerDied","Data":"dbb6500f9b05697e49983b947e554cc120f56667f136da516b67517e34882406"} Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.822313 4769 scope.go:117] "RemoveContainer" containerID="c113fcdaeea4262d86857b16ac35b7758e49c93e4706f03d96c76c2d8565a5e4" Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.822333 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-xsb4l" Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.822578 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299-config-data" (OuterVolumeSpecName: "config-data") pod "b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299" (UID: "b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.827273 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-89bdb59-vr94p" event={"ID":"5c4b43cf-c766-4b56-a016-a3f2d26656a1","Type":"ContainerStarted","Data":"1d75749a17b6133af8d4548979dade04116fbb2ac5e6040ef99419c36e560e9d"} Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.831961 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55fff446b9-h5gf8" event={"ID":"0a7a7218-57a6-4091-9bd0-568fda3122fd","Type":"ContainerStarted","Data":"4eb18875b161af5978dbb74b8a3a3ee948221ffd789ab96d7eb64d0341675e2e"} Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.838599 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-wdqr9" podStartSLOduration=2.838575401 podStartE2EDuration="2.838575401s" podCreationTimestamp="2026-01-22 14:01:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:01:18.831563261 +0000 UTC m=+1058.242673190" watchObservedRunningTime="2026-01-22 14:01:18.838575401 +0000 UTC m=+1058.249685330" Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.880712 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d83064db-7f62-4af5-9747-89e9054b3a16-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d83064db-7f62-4af5-9747-89e9054b3a16" (UID: "d83064db-7f62-4af5-9747-89e9054b3a16"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.895545 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.895582 4769 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d83064db-7f62-4af5-9747-89e9054b3a16-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.895596 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.899951 4769 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d83064db-7f62-4af5-9747-89e9054b3a16-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.916401 4769 scope.go:117] "RemoveContainer" containerID="9774b2b75f642e7815cf529b073ae431051a8ec6d35e8b2a86b691abcc256a58" Jan 22 14:01:18 crc kubenswrapper[4769]: W0122 14:01:18.937217 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3eb8819f_512d_43d8_a59e_1ba8e7e1fb06.slice/crio-81f9fccf6c7c0251061ae1067ee4088dd1acc6cd4f8ca50a99ec0953acadb3c6 WatchSource:0}: Error finding container 81f9fccf6c7c0251061ae1067ee4088dd1acc6cd4f8ca50a99ec0953acadb3c6: Status 404 returned error can't find the container with id 81f9fccf6c7c0251061ae1067ee4088dd1acc6cd4f8ca50a99ec0953acadb3c6 Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 
14:01:18.937834 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d83064db-7f62-4af5-9747-89e9054b3a16-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d83064db-7f62-4af5-9747-89e9054b3a16" (UID: "d83064db-7f62-4af5-9747-89e9054b3a16"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.938664 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-l4hnw"] Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.938703 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-rqjpw"] Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.941462 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d83064db-7f62-4af5-9747-89e9054b3a16-config" (OuterVolumeSpecName: "config") pod "d83064db-7f62-4af5-9747-89e9054b3a16" (UID: "d83064db-7f62-4af5-9747-89e9054b3a16"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.966220 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-zzjpd"] Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.980697 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-bjdj8"] Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.999038 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d83064db-7f62-4af5-9747-89e9054b3a16-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d83064db-7f62-4af5-9747-89e9054b3a16" (UID: "d83064db-7f62-4af5-9747-89e9054b3a16"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.001894 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d83064db-7f62-4af5-9747-89e9054b3a16-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.001927 4769 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d83064db-7f62-4af5-9747-89e9054b3a16-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.001939 4769 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d83064db-7f62-4af5-9747-89e9054b3a16-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.055211 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5c66f6f78c-g92qm"] Jan 22 14:01:19 crc kubenswrapper[4769]: W0122 14:01:19.073163 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf79e78c3_4c98_41e2_be1e_19d794ed1c17.slice/crio-d324a8923d4121e52b8f50a61c76fa823727fdd525010d41f8feff37a542e75d WatchSource:0}: Error finding container d324a8923d4121e52b8f50a61c76fa823727fdd525010d41f8feff37a542e75d: Status 404 returned error can't find the container with id d324a8923d4121e52b8f50a61c76fa823727fdd525010d41f8feff37a542e75d Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.098354 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-x8v8z"] Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.192368 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-x8v8z"] Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.289701 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-8bcps"] Jan 22 14:01:19 crc kubenswrapper[4769]: E0122 14:01:19.290123 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299" containerName="glance-db-sync" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.290142 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299" containerName="glance-db-sync" Jan 22 14:01:19 crc kubenswrapper[4769]: E0122 14:01:19.290163 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d83064db-7f62-4af5-9747-89e9054b3a16" containerName="dnsmasq-dns" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.290168 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="d83064db-7f62-4af5-9747-89e9054b3a16" containerName="dnsmasq-dns" Jan 22 14:01:19 crc kubenswrapper[4769]: E0122 14:01:19.290183 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d83064db-7f62-4af5-9747-89e9054b3a16" containerName="init" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.290189 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="d83064db-7f62-4af5-9747-89e9054b3a16" containerName="init" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.290349 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299" containerName="glance-db-sync" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.290366 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="d83064db-7f62-4af5-9747-89e9054b3a16" 
containerName="dnsmasq-dns" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.291209 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-8bcps" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.307091 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-8bcps"] Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.392097 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-xsb4l"] Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.417809 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-xsb4l"] Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.431469 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/09f60324-cca8-4988-bf9b-6967d2bfe9f6-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-8bcps\" (UID: \"09f60324-cca8-4988-bf9b-6967d2bfe9f6\") " pod="openstack/dnsmasq-dns-8b5c85b87-8bcps" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.431569 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09f60324-cca8-4988-bf9b-6967d2bfe9f6-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-8bcps\" (UID: \"09f60324-cca8-4988-bf9b-6967d2bfe9f6\") " pod="openstack/dnsmasq-dns-8b5c85b87-8bcps" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.431598 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/09f60324-cca8-4988-bf9b-6967d2bfe9f6-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-8bcps\" (UID: \"09f60324-cca8-4988-bf9b-6967d2bfe9f6\") " pod="openstack/dnsmasq-dns-8b5c85b87-8bcps" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.431619 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09f60324-cca8-4988-bf9b-6967d2bfe9f6-config\") pod \"dnsmasq-dns-8b5c85b87-8bcps\" (UID: \"09f60324-cca8-4988-bf9b-6967d2bfe9f6\") " pod="openstack/dnsmasq-dns-8b5c85b87-8bcps" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.431640 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9lf9\" (UniqueName: \"kubernetes.io/projected/09f60324-cca8-4988-bf9b-6967d2bfe9f6-kube-api-access-w9lf9\") pod \"dnsmasq-dns-8b5c85b87-8bcps\" (UID: \"09f60324-cca8-4988-bf9b-6967d2bfe9f6\") " pod="openstack/dnsmasq-dns-8b5c85b87-8bcps" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.431757 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/09f60324-cca8-4988-bf9b-6967d2bfe9f6-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-8bcps\" (UID: \"09f60324-cca8-4988-bf9b-6967d2bfe9f6\") " pod="openstack/dnsmasq-dns-8b5c85b87-8bcps" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.532921 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/09f60324-cca8-4988-bf9b-6967d2bfe9f6-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-8bcps\" (UID: \"09f60324-cca8-4988-bf9b-6967d2bfe9f6\") " pod="openstack/dnsmasq-dns-8b5c85b87-8bcps" Jan 22 14:01:19 crc 
kubenswrapper[4769]: I0122 14:01:19.532977 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/09f60324-cca8-4988-bf9b-6967d2bfe9f6-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-8bcps\" (UID: \"09f60324-cca8-4988-bf9b-6967d2bfe9f6\") " pod="openstack/dnsmasq-dns-8b5c85b87-8bcps" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.533044 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09f60324-cca8-4988-bf9b-6967d2bfe9f6-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-8bcps\" (UID: \"09f60324-cca8-4988-bf9b-6967d2bfe9f6\") " pod="openstack/dnsmasq-dns-8b5c85b87-8bcps" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.533082 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/09f60324-cca8-4988-bf9b-6967d2bfe9f6-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-8bcps\" (UID: \"09f60324-cca8-4988-bf9b-6967d2bfe9f6\") " pod="openstack/dnsmasq-dns-8b5c85b87-8bcps" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.533110 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09f60324-cca8-4988-bf9b-6967d2bfe9f6-config\") pod \"dnsmasq-dns-8b5c85b87-8bcps\" (UID: \"09f60324-cca8-4988-bf9b-6967d2bfe9f6\") " pod="openstack/dnsmasq-dns-8b5c85b87-8bcps" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.533141 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9lf9\" (UniqueName: \"kubernetes.io/projected/09f60324-cca8-4988-bf9b-6967d2bfe9f6-kube-api-access-w9lf9\") pod \"dnsmasq-dns-8b5c85b87-8bcps\" (UID: \"09f60324-cca8-4988-bf9b-6967d2bfe9f6\") " pod="openstack/dnsmasq-dns-8b5c85b87-8bcps" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.533915 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/09f60324-cca8-4988-bf9b-6967d2bfe9f6-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-8bcps\" (UID: \"09f60324-cca8-4988-bf9b-6967d2bfe9f6\") " pod="openstack/dnsmasq-dns-8b5c85b87-8bcps" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.533942 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/09f60324-cca8-4988-bf9b-6967d2bfe9f6-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-8bcps\" (UID: \"09f60324-cca8-4988-bf9b-6967d2bfe9f6\") " pod="openstack/dnsmasq-dns-8b5c85b87-8bcps" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.533915 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/09f60324-cca8-4988-bf9b-6967d2bfe9f6-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-8bcps\" (UID: \"09f60324-cca8-4988-bf9b-6967d2bfe9f6\") " pod="openstack/dnsmasq-dns-8b5c85b87-8bcps" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.534427 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09f60324-cca8-4988-bf9b-6967d2bfe9f6-config\") pod \"dnsmasq-dns-8b5c85b87-8bcps\" (UID: \"09f60324-cca8-4988-bf9b-6967d2bfe9f6\") " pod="openstack/dnsmasq-dns-8b5c85b87-8bcps" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.534524 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09f60324-cca8-4988-bf9b-6967d2bfe9f6-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-8bcps\" (UID: \"09f60324-cca8-4988-bf9b-6967d2bfe9f6\") " pod="openstack/dnsmasq-dns-8b5c85b87-8bcps" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.549224 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5c66f6f78c-g92qm"] Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.594592 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9lf9\" (UniqueName: \"kubernetes.io/projected/09f60324-cca8-4988-bf9b-6967d2bfe9f6-kube-api-access-w9lf9\") pod \"dnsmasq-dns-8b5c85b87-8bcps\" (UID: \"09f60324-cca8-4988-bf9b-6967d2bfe9f6\") " pod="openstack/dnsmasq-dns-8b5c85b87-8bcps" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.629653 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.631858 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.660487 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-khhk4" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.661020 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.676263 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-8bcps" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.678914 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.691854 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.699099 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-88b8d5fbf-mdp8d"] Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.700745 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-88b8d5fbf-mdp8d" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.703507 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-88b8d5fbf-mdp8d"] Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.775580 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84850145-89ac-4660-8a13-6abde9509589-logs\") pod \"glance-default-external-api-0\" (UID: \"84850145-89ac-4660-8a13-6abde9509589\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.775641 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84850145-89ac-4660-8a13-6abde9509589-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"84850145-89ac-4660-8a13-6abde9509589\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.775710 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"84850145-89ac-4660-8a13-6abde9509589\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.775731 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/84850145-89ac-4660-8a13-6abde9509589-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"84850145-89ac-4660-8a13-6abde9509589\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.775746 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84850145-89ac-4660-8a13-6abde9509589-config-data\") pod \"glance-default-external-api-0\" (UID: \"84850145-89ac-4660-8a13-6abde9509589\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.775764 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84850145-89ac-4660-8a13-6abde9509589-scripts\") pod \"glance-default-external-api-0\" (UID: \"84850145-89ac-4660-8a13-6abde9509589\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.775780 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvsdt\" (UniqueName: \"kubernetes.io/projected/84850145-89ac-4660-8a13-6abde9509589-kube-api-access-vvsdt\") pod \"glance-default-external-api-0\" (UID: \"84850145-89ac-4660-8a13-6abde9509589\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.853810 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.878706 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1-logs\") pod \"horizon-88b8d5fbf-mdp8d\" (UID: \"c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1\") " 
pod="openstack/horizon-88b8d5fbf-mdp8d" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.878808 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"84850145-89ac-4660-8a13-6abde9509589\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.878838 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/84850145-89ac-4660-8a13-6abde9509589-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"84850145-89ac-4660-8a13-6abde9509589\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.878854 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84850145-89ac-4660-8a13-6abde9509589-config-data\") pod \"glance-default-external-api-0\" (UID: \"84850145-89ac-4660-8a13-6abde9509589\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.878872 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84850145-89ac-4660-8a13-6abde9509589-scripts\") pod \"glance-default-external-api-0\" (UID: \"84850145-89ac-4660-8a13-6abde9509589\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.878891 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvsdt\" (UniqueName: \"kubernetes.io/projected/84850145-89ac-4660-8a13-6abde9509589-kube-api-access-vvsdt\") pod \"glance-default-external-api-0\" (UID: \"84850145-89ac-4660-8a13-6abde9509589\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.878914 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1-horizon-secret-key\") pod \"horizon-88b8d5fbf-mdp8d\" (UID: \"c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1\") " pod="openstack/horizon-88b8d5fbf-mdp8d" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.878948 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7lts\" (UniqueName: \"kubernetes.io/projected/c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1-kube-api-access-q7lts\") pod \"horizon-88b8d5fbf-mdp8d\" (UID: \"c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1\") " pod="openstack/horizon-88b8d5fbf-mdp8d" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.878968 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1-config-data\") pod \"horizon-88b8d5fbf-mdp8d\" (UID: \"c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1\") " pod="openstack/horizon-88b8d5fbf-mdp8d" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.878990 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84850145-89ac-4660-8a13-6abde9509589-logs\") pod \"glance-default-external-api-0\" (UID: \"84850145-89ac-4660-8a13-6abde9509589\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:19 crc 
kubenswrapper[4769]: I0122 14:01:19.879010 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1-scripts\") pod \"horizon-88b8d5fbf-mdp8d\" (UID: \"c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1\") " pod="openstack/horizon-88b8d5fbf-mdp8d" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.879045 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84850145-89ac-4660-8a13-6abde9509589-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"84850145-89ac-4660-8a13-6abde9509589\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.882069 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.883558 4769 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"84850145-89ac-4660-8a13-6abde9509589\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-external-api-0" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.885244 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.889700 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84850145-89ac-4660-8a13-6abde9509589-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"84850145-89ac-4660-8a13-6abde9509589\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.899237 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/84850145-89ac-4660-8a13-6abde9509589-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"84850145-89ac-4660-8a13-6abde9509589\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.899405 4769 generic.go:334] "Generic (PLEG): container finished" podID="3c9c86b4-dc88-4cbe-82e1-40198f4b39cd" containerID="eee63cea153f84f7bdefbf41b826f8e50ee41200112ad207069eaf7592e1b871" exitCode=0 Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.899601 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.899627 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76fcf4b695-x8v8z" event={"ID":"3c9c86b4-dc88-4cbe-82e1-40198f4b39cd","Type":"ContainerDied","Data":"eee63cea153f84f7bdefbf41b826f8e50ee41200112ad207069eaf7592e1b871"} Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.899649 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76fcf4b695-x8v8z" event={"ID":"3c9c86b4-dc88-4cbe-82e1-40198f4b39cd","Type":"ContainerStarted","Data":"dd163787184b799a47be1dc4a764a72ea38c1e55f5f24860611abd6e7a863477"} Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.899947 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84850145-89ac-4660-8a13-6abde9509589-logs\") pod 
\"glance-default-external-api-0\" (UID: \"84850145-89ac-4660-8a13-6abde9509589\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.901538 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.907626 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84850145-89ac-4660-8a13-6abde9509589-scripts\") pod \"glance-default-external-api-0\" (UID: \"84850145-89ac-4660-8a13-6abde9509589\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.909661 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84850145-89ac-4660-8a13-6abde9509589-config-data\") pod \"glance-default-external-api-0\" (UID: \"84850145-89ac-4660-8a13-6abde9509589\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.929810 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvsdt\" (UniqueName: \"kubernetes.io/projected/84850145-89ac-4660-8a13-6abde9509589-kube-api-access-vvsdt\") pod \"glance-default-external-api-0\" (UID: \"84850145-89ac-4660-8a13-6abde9509589\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.930365 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-l4hnw" event={"ID":"3eb8819f-512d-43d8-a59e-1ba8e7e1fb06","Type":"ContainerStarted","Data":"81f9fccf6c7c0251061ae1067ee4088dd1acc6cd4f8ca50a99ec0953acadb3c6"} Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.955396 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-rqjpw" event={"ID":"f7c0ef06-5806-418c-8a10-81ea6afb0401","Type":"ContainerStarted","Data":"3c1a07b1b0fdcc85ff1215b6b0ffc50eb270b562fc9ca8873d111f3b05220e1b"} Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.955443 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-rqjpw" event={"ID":"f7c0ef06-5806-418c-8a10-81ea6afb0401","Type":"ContainerStarted","Data":"f5f34c732ee37b95ec899f49855f9cce53d55317437fe6fd87284898a608994d"} Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.968083 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"84850145-89ac-4660-8a13-6abde9509589\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.975380 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-zzjpd" event={"ID":"a7f766e1-262c-4861-a117-2454631e284f","Type":"ContainerStarted","Data":"d9766e548e18d10e2948ccf9973b496ef374cc1f1a4772a78ff7fa96b507f7e2"} Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.983954 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"20251361-dc9f-403b-bffa-2a52a61e1bf4\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.984132 4769 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1-logs\") pod \"horizon-88b8d5fbf-mdp8d\" (UID: \"c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1\") " pod="openstack/horizon-88b8d5fbf-mdp8d" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.984162 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20251361-dc9f-403b-bffa-2a52a61e1bf4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"20251361-dc9f-403b-bffa-2a52a61e1bf4\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.984307 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1-horizon-secret-key\") pod \"horizon-88b8d5fbf-mdp8d\" (UID: \"c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1\") " pod="openstack/horizon-88b8d5fbf-mdp8d" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.984377 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/20251361-dc9f-403b-bffa-2a52a61e1bf4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"20251361-dc9f-403b-bffa-2a52a61e1bf4\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.984415 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20251361-dc9f-403b-bffa-2a52a61e1bf4-logs\") pod \"glance-default-internal-api-0\" (UID: \"20251361-dc9f-403b-bffa-2a52a61e1bf4\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.984480 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20251361-dc9f-403b-bffa-2a52a61e1bf4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"20251361-dc9f-403b-bffa-2a52a61e1bf4\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.984557 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7lts\" (UniqueName: \"kubernetes.io/projected/c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1-kube-api-access-q7lts\") pod \"horizon-88b8d5fbf-mdp8d\" (UID: \"c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1\") " pod="openstack/horizon-88b8d5fbf-mdp8d" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.984627 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1-config-data\") pod \"horizon-88b8d5fbf-mdp8d\" (UID: \"c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1\") " pod="openstack/horizon-88b8d5fbf-mdp8d" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.985425 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1-scripts\") pod \"horizon-88b8d5fbf-mdp8d\" (UID: \"c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1\") " pod="openstack/horizon-88b8d5fbf-mdp8d" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.998237 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h46h6\" 
(UniqueName: \"kubernetes.io/projected/20251361-dc9f-403b-bffa-2a52a61e1bf4-kube-api-access-h46h6\") pod \"glance-default-internal-api-0\" (UID: \"20251361-dc9f-403b-bffa-2a52a61e1bf4\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.998313 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20251361-dc9f-403b-bffa-2a52a61e1bf4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"20251361-dc9f-403b-bffa-2a52a61e1bf4\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.998866 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1-scripts\") pod \"horizon-88b8d5fbf-mdp8d\" (UID: \"c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1\") " pod="openstack/horizon-88b8d5fbf-mdp8d" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.999146 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1-logs\") pod \"horizon-88b8d5fbf-mdp8d\" (UID: \"c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1\") " pod="openstack/horizon-88b8d5fbf-mdp8d" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.999486 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1-horizon-secret-key\") pod \"horizon-88b8d5fbf-mdp8d\" (UID: \"c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1\") " pod="openstack/horizon-88b8d5fbf-mdp8d" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.999779 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1-config-data\") pod \"horizon-88b8d5fbf-mdp8d\" (UID: \"c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1\") " pod="openstack/horizon-88b8d5fbf-mdp8d" Jan 22 14:01:20 crc kubenswrapper[4769]: I0122 14:01:20.004974 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-bjdj8" event={"ID":"a0e92228-1a9b-49fc-9dfd-0493f70f5ee8","Type":"ContainerStarted","Data":"db6d489e657294f84dd39f03818355418206b6b45168e98d6d149865405021b3"} Jan 22 14:01:20 crc kubenswrapper[4769]: I0122 14:01:20.014185 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c66f6f78c-g92qm" event={"ID":"f79e78c3-4c98-41e2-be1e-19d794ed1c17","Type":"ContainerStarted","Data":"d324a8923d4121e52b8f50a61c76fa823727fdd525010d41f8feff37a542e75d"} Jan 22 14:01:20 crc kubenswrapper[4769]: I0122 14:01:20.030447 4769 generic.go:334] "Generic (PLEG): container finished" podID="0a7a7218-57a6-4091-9bd0-568fda3122fd" containerID="5b046e6375a09633251daeedb629c23f7e50d18e24421e4669f77c9e865c9999" exitCode=0 Jan 22 14:01:20 crc kubenswrapper[4769]: I0122 14:01:20.031352 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55fff446b9-h5gf8" event={"ID":"0a7a7218-57a6-4091-9bd0-568fda3122fd","Type":"ContainerDied","Data":"5b046e6375a09633251daeedb629c23f7e50d18e24421e4669f77c9e865c9999"} Jan 22 14:01:20 crc kubenswrapper[4769]: I0122 14:01:20.050402 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 22 14:01:20 crc kubenswrapper[4769]: I0122 14:01:20.052276 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-rqjpw" podStartSLOduration=3.05225483 podStartE2EDuration="3.05225483s" podCreationTimestamp="2026-01-22 14:01:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:01:20.031507788 +0000 UTC m=+1059.442617717" watchObservedRunningTime="2026-01-22 14:01:20.05225483 +0000 UTC m=+1059.463364759" Jan 22 14:01:20 crc kubenswrapper[4769]: I0122 14:01:20.055515 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7lts\" (UniqueName: \"kubernetes.io/projected/c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1-kube-api-access-q7lts\") pod \"horizon-88b8d5fbf-mdp8d\" (UID: \"c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1\") " pod="openstack/horizon-88b8d5fbf-mdp8d" Jan 22 14:01:20 crc kubenswrapper[4769]: I0122 14:01:20.104781 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/20251361-dc9f-403b-bffa-2a52a61e1bf4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"20251361-dc9f-403b-bffa-2a52a61e1bf4\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:01:20 crc kubenswrapper[4769]: I0122 14:01:20.105112 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20251361-dc9f-403b-bffa-2a52a61e1bf4-logs\") pod \"glance-default-internal-api-0\" (UID: \"20251361-dc9f-403b-bffa-2a52a61e1bf4\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:01:20 crc kubenswrapper[4769]: I0122 14:01:20.105132 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20251361-dc9f-403b-bffa-2a52a61e1bf4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"20251361-dc9f-403b-bffa-2a52a61e1bf4\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:01:20 crc kubenswrapper[4769]: I0122 14:01:20.105194 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h46h6\" (UniqueName: \"kubernetes.io/projected/20251361-dc9f-403b-bffa-2a52a61e1bf4-kube-api-access-h46h6\") pod \"glance-default-internal-api-0\" (UID: \"20251361-dc9f-403b-bffa-2a52a61e1bf4\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:01:20 crc kubenswrapper[4769]: I0122 14:01:20.105215 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20251361-dc9f-403b-bffa-2a52a61e1bf4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"20251361-dc9f-403b-bffa-2a52a61e1bf4\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:01:20 crc kubenswrapper[4769]: I0122 14:01:20.105268 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"20251361-dc9f-403b-bffa-2a52a61e1bf4\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:01:20 crc kubenswrapper[4769]: I0122 14:01:20.105320 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/20251361-dc9f-403b-bffa-2a52a61e1bf4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"20251361-dc9f-403b-bffa-2a52a61e1bf4\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:01:20 crc kubenswrapper[4769]: I0122 14:01:20.106123 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/20251361-dc9f-403b-bffa-2a52a61e1bf4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"20251361-dc9f-403b-bffa-2a52a61e1bf4\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:01:20 crc kubenswrapper[4769]: I0122 14:01:20.106737 4769 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"20251361-dc9f-403b-bffa-2a52a61e1bf4\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-internal-api-0" Jan 22 14:01:20 crc kubenswrapper[4769]: I0122 14:01:20.107958 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20251361-dc9f-403b-bffa-2a52a61e1bf4-logs\") pod \"glance-default-internal-api-0\" (UID: \"20251361-dc9f-403b-bffa-2a52a61e1bf4\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:01:20 crc kubenswrapper[4769]: I0122 14:01:20.109930 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20251361-dc9f-403b-bffa-2a52a61e1bf4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"20251361-dc9f-403b-bffa-2a52a61e1bf4\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:01:20 crc kubenswrapper[4769]: I0122 14:01:20.116292 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-88b8d5fbf-mdp8d" Jan 22 14:01:20 crc kubenswrapper[4769]: I0122 14:01:20.136320 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20251361-dc9f-403b-bffa-2a52a61e1bf4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"20251361-dc9f-403b-bffa-2a52a61e1bf4\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:01:20 crc kubenswrapper[4769]: I0122 14:01:20.138374 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20251361-dc9f-403b-bffa-2a52a61e1bf4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"20251361-dc9f-403b-bffa-2a52a61e1bf4\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:01:20 crc kubenswrapper[4769]: I0122 14:01:20.168729 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h46h6\" (UniqueName: \"kubernetes.io/projected/20251361-dc9f-403b-bffa-2a52a61e1bf4-kube-api-access-h46h6\") pod \"glance-default-internal-api-0\" (UID: \"20251361-dc9f-403b-bffa-2a52a61e1bf4\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:01:20 crc kubenswrapper[4769]: I0122 14:01:20.188504 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"20251361-dc9f-403b-bffa-2a52a61e1bf4\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:01:20 crc kubenswrapper[4769]: I0122 14:01:20.218414 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 22 14:01:20 crc kubenswrapper[4769]: I0122 14:01:20.533393 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76fcf4b695-x8v8z" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:20.621866 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c9c86b4-dc88-4cbe-82e1-40198f4b39cd-config\") pod \"3c9c86b4-dc88-4cbe-82e1-40198f4b39cd\" (UID: \"3c9c86b4-dc88-4cbe-82e1-40198f4b39cd\") " Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:20.622051 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c9c86b4-dc88-4cbe-82e1-40198f4b39cd-ovsdbserver-sb\") pod \"3c9c86b4-dc88-4cbe-82e1-40198f4b39cd\" (UID: \"3c9c86b4-dc88-4cbe-82e1-40198f4b39cd\") " Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:20.622122 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wf4pp\" (UniqueName: \"kubernetes.io/projected/3c9c86b4-dc88-4cbe-82e1-40198f4b39cd-kube-api-access-wf4pp\") pod \"3c9c86b4-dc88-4cbe-82e1-40198f4b39cd\" (UID: \"3c9c86b4-dc88-4cbe-82e1-40198f4b39cd\") " Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:20.622274 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c9c86b4-dc88-4cbe-82e1-40198f4b39cd-dns-svc\") pod \"3c9c86b4-dc88-4cbe-82e1-40198f4b39cd\" (UID: \"3c9c86b4-dc88-4cbe-82e1-40198f4b39cd\") " Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:20.622344 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c9c86b4-dc88-4cbe-82e1-40198f4b39cd-ovsdbserver-nb\") pod \"3c9c86b4-dc88-4cbe-82e1-40198f4b39cd\" (UID: \"3c9c86b4-dc88-4cbe-82e1-40198f4b39cd\") " Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:20.622390 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3c9c86b4-dc88-4cbe-82e1-40198f4b39cd-dns-swift-storage-0\") pod \"3c9c86b4-dc88-4cbe-82e1-40198f4b39cd\" (UID: \"3c9c86b4-dc88-4cbe-82e1-40198f4b39cd\") " Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:20.637259 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c9c86b4-dc88-4cbe-82e1-40198f4b39cd-kube-api-access-wf4pp" (OuterVolumeSpecName: "kube-api-access-wf4pp") pod "3c9c86b4-dc88-4cbe-82e1-40198f4b39cd" (UID: "3c9c86b4-dc88-4cbe-82e1-40198f4b39cd"). InnerVolumeSpecName "kube-api-access-wf4pp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:20.655477 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c9c86b4-dc88-4cbe-82e1-40198f4b39cd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3c9c86b4-dc88-4cbe-82e1-40198f4b39cd" (UID: "3c9c86b4-dc88-4cbe-82e1-40198f4b39cd"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:20.689464 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c9c86b4-dc88-4cbe-82e1-40198f4b39cd-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3c9c86b4-dc88-4cbe-82e1-40198f4b39cd" (UID: "3c9c86b4-dc88-4cbe-82e1-40198f4b39cd"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:20.702684 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c9c86b4-dc88-4cbe-82e1-40198f4b39cd-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3c9c86b4-dc88-4cbe-82e1-40198f4b39cd" (UID: "3c9c86b4-dc88-4cbe-82e1-40198f4b39cd"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:20.710471 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c9c86b4-dc88-4cbe-82e1-40198f4b39cd-config" (OuterVolumeSpecName: "config") pod "3c9c86b4-dc88-4cbe-82e1-40198f4b39cd" (UID: "3c9c86b4-dc88-4cbe-82e1-40198f4b39cd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:20.711305 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c9c86b4-dc88-4cbe-82e1-40198f4b39cd-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3c9c86b4-dc88-4cbe-82e1-40198f4b39cd" (UID: "3c9c86b4-dc88-4cbe-82e1-40198f4b39cd"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:20.726034 4769 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c9c86b4-dc88-4cbe-82e1-40198f4b39cd-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:20.726321 4769 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c9c86b4-dc88-4cbe-82e1-40198f4b39cd-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:20.726331 4769 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3c9c86b4-dc88-4cbe-82e1-40198f4b39cd-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:20.726344 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c9c86b4-dc88-4cbe-82e1-40198f4b39cd-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:20.726354 4769 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c9c86b4-dc88-4cbe-82e1-40198f4b39cd-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:20.726363 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wf4pp\" (UniqueName: \"kubernetes.io/projected/3c9c86b4-dc88-4cbe-82e1-40198f4b39cd-kube-api-access-wf4pp\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:20.751731 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55fff446b9-h5gf8" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:20.820712 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:20.827447 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7cpcr\" (UniqueName: \"kubernetes.io/projected/0a7a7218-57a6-4091-9bd0-568fda3122fd-kube-api-access-7cpcr\") pod \"0a7a7218-57a6-4091-9bd0-568fda3122fd\" (UID: \"0a7a7218-57a6-4091-9bd0-568fda3122fd\") " Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:20.827633 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0a7a7218-57a6-4091-9bd0-568fda3122fd-dns-svc\") pod \"0a7a7218-57a6-4091-9bd0-568fda3122fd\" (UID: \"0a7a7218-57a6-4091-9bd0-568fda3122fd\") " Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:20.827910 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0a7a7218-57a6-4091-9bd0-568fda3122fd-dns-swift-storage-0\") pod \"0a7a7218-57a6-4091-9bd0-568fda3122fd\" (UID: \"0a7a7218-57a6-4091-9bd0-568fda3122fd\") " Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:20.827969 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a7a7218-57a6-4091-9bd0-568fda3122fd-config\") pod \"0a7a7218-57a6-4091-9bd0-568fda3122fd\" (UID: \"0a7a7218-57a6-4091-9bd0-568fda3122fd\") " Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:20.828055 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0a7a7218-57a6-4091-9bd0-568fda3122fd-ovsdbserver-sb\") pod \"0a7a7218-57a6-4091-9bd0-568fda3122fd\" (UID: \"0a7a7218-57a6-4091-9bd0-568fda3122fd\") " Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:20.828102 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0a7a7218-57a6-4091-9bd0-568fda3122fd-ovsdbserver-nb\") pod \"0a7a7218-57a6-4091-9bd0-568fda3122fd\" (UID: \"0a7a7218-57a6-4091-9bd0-568fda3122fd\") " Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:20.851640 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a7a7218-57a6-4091-9bd0-568fda3122fd-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0a7a7218-57a6-4091-9bd0-568fda3122fd" (UID: "0a7a7218-57a6-4091-9bd0-568fda3122fd"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:20.852540 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a7a7218-57a6-4091-9bd0-568fda3122fd-kube-api-access-7cpcr" (OuterVolumeSpecName: "kube-api-access-7cpcr") pod "0a7a7218-57a6-4091-9bd0-568fda3122fd" (UID: "0a7a7218-57a6-4091-9bd0-568fda3122fd"). InnerVolumeSpecName "kube-api-access-7cpcr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:20.862305 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a7a7218-57a6-4091-9bd0-568fda3122fd-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0a7a7218-57a6-4091-9bd0-568fda3122fd" (UID: "0a7a7218-57a6-4091-9bd0-568fda3122fd"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:20.870381 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a7a7218-57a6-4091-9bd0-568fda3122fd-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0a7a7218-57a6-4091-9bd0-568fda3122fd" (UID: "0a7a7218-57a6-4091-9bd0-568fda3122fd"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:20.870553 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a7a7218-57a6-4091-9bd0-568fda3122fd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0a7a7218-57a6-4091-9bd0-568fda3122fd" (UID: "0a7a7218-57a6-4091-9bd0-568fda3122fd"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:20.875098 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a7a7218-57a6-4091-9bd0-568fda3122fd-config" (OuterVolumeSpecName: "config") pod "0a7a7218-57a6-4091-9bd0-568fda3122fd" (UID: "0a7a7218-57a6-4091-9bd0-568fda3122fd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:20.924579 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d83064db-7f62-4af5-9747-89e9054b3a16" path="/var/lib/kubelet/pods/d83064db-7f62-4af5-9747-89e9054b3a16/volumes" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:20.939928 4769 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0a7a7218-57a6-4091-9bd0-568fda3122fd-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:20.940019 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a7a7218-57a6-4091-9bd0-568fda3122fd-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:20.940044 4769 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0a7a7218-57a6-4091-9bd0-568fda3122fd-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:20.940066 4769 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0a7a7218-57a6-4091-9bd0-568fda3122fd-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:20.940081 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7cpcr\" (UniqueName: \"kubernetes.io/projected/0a7a7218-57a6-4091-9bd0-568fda3122fd-kube-api-access-7cpcr\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:20.940093 4769 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/0a7a7218-57a6-4091-9bd0-568fda3122fd-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:21.052591 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55fff446b9-h5gf8" event={"ID":"0a7a7218-57a6-4091-9bd0-568fda3122fd","Type":"ContainerDied","Data":"4eb18875b161af5978dbb74b8a3a3ee948221ffd789ab96d7eb64d0341675e2e"} Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:21.052645 4769 scope.go:117] "RemoveContainer" containerID="5b046e6375a09633251daeedb629c23f7e50d18e24421e4669f77c9e865c9999" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:21.052769 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55fff446b9-h5gf8" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:21.069932 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76fcf4b695-x8v8z" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:21.070111 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76fcf4b695-x8v8z" event={"ID":"3c9c86b4-dc88-4cbe-82e1-40198f4b39cd","Type":"ContainerDied","Data":"dd163787184b799a47be1dc4a764a72ea38c1e55f5f24860611abd6e7a863477"} Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:21.074416 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"84850145-89ac-4660-8a13-6abde9509589","Type":"ContainerStarted","Data":"dbba61067789f8e4b68dedf1066a578d68118546758df6cfdb39ad7d7ae20588"} Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:21.118982 4769 scope.go:117] "RemoveContainer" containerID="eee63cea153f84f7bdefbf41b826f8e50ee41200112ad207069eaf7592e1b871" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:21.179757 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-x8v8z"] Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:21.186702 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-x8v8z"] Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:21.233856 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-h5gf8"] Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:21.239648 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-h5gf8"] Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:22.899189 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a7a7218-57a6-4091-9bd0-568fda3122fd" path="/var/lib/kubelet/pods/0a7a7218-57a6-4091-9bd0-568fda3122fd/volumes" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:22.900120 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c9c86b4-dc88-4cbe-82e1-40198f4b39cd" path="/var/lib/kubelet/pods/3c9c86b4-dc88-4cbe-82e1-40198f4b39cd/volumes" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:23.099944 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"84850145-89ac-4660-8a13-6abde9509589","Type":"ContainerStarted","Data":"f3704eb4ce5b135ab7bee85bad1dffc4bc9ae3c908e85c1bad050b5ae696d451"} Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:29.772626 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:29.847342 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/glance-default-internal-api-0"] Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.104639 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-89bdb59-vr94p"] Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.130926 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6464b9bcc6-tjgjv"] Jan 22 14:01:33 crc kubenswrapper[4769]: E0122 14:01:30.131503 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c9c86b4-dc88-4cbe-82e1-40198f4b39cd" containerName="init" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.131525 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c9c86b4-dc88-4cbe-82e1-40198f4b39cd" containerName="init" Jan 22 14:01:33 crc kubenswrapper[4769]: E0122 14:01:30.131575 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a7a7218-57a6-4091-9bd0-568fda3122fd" containerName="init" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.131581 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a7a7218-57a6-4091-9bd0-568fda3122fd" containerName="init" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.131838 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a7a7218-57a6-4091-9bd0-568fda3122fd" containerName="init" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.131875 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c9c86b4-dc88-4cbe-82e1-40198f4b39cd" containerName="init" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.133075 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6464b9bcc6-tjgjv" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.135180 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.194138 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6464b9bcc6-tjgjv"] Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.222382 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-88b8d5fbf-mdp8d"] Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.245903 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pv2px\" (UniqueName: \"kubernetes.io/projected/aa581bf8-802c-4c64-80fe-83a1baf50a6e-kube-api-access-pv2px\") pod \"horizon-6464b9bcc6-tjgjv\" (UID: \"aa581bf8-802c-4c64-80fe-83a1baf50a6e\") " pod="openstack/horizon-6464b9bcc6-tjgjv" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.245963 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aa581bf8-802c-4c64-80fe-83a1baf50a6e-config-data\") pod \"horizon-6464b9bcc6-tjgjv\" (UID: \"aa581bf8-802c-4c64-80fe-83a1baf50a6e\") " pod="openstack/horizon-6464b9bcc6-tjgjv" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.246083 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/aa581bf8-802c-4c64-80fe-83a1baf50a6e-horizon-secret-key\") pod \"horizon-6464b9bcc6-tjgjv\" (UID: \"aa581bf8-802c-4c64-80fe-83a1baf50a6e\") " pod="openstack/horizon-6464b9bcc6-tjgjv" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.246129 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa581bf8-802c-4c64-80fe-83a1baf50a6e-combined-ca-bundle\") pod \"horizon-6464b9bcc6-tjgjv\" (UID: \"aa581bf8-802c-4c64-80fe-83a1baf50a6e\") " pod="openstack/horizon-6464b9bcc6-tjgjv" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.246169 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa581bf8-802c-4c64-80fe-83a1baf50a6e-horizon-tls-certs\") pod \"horizon-6464b9bcc6-tjgjv\" (UID: \"aa581bf8-802c-4c64-80fe-83a1baf50a6e\") " pod="openstack/horizon-6464b9bcc6-tjgjv" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.246283 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aa581bf8-802c-4c64-80fe-83a1baf50a6e-scripts\") pod \"horizon-6464b9bcc6-tjgjv\" (UID: \"aa581bf8-802c-4c64-80fe-83a1baf50a6e\") " pod="openstack/horizon-6464b9bcc6-tjgjv" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.246356 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa581bf8-802c-4c64-80fe-83a1baf50a6e-logs\") pod \"horizon-6464b9bcc6-tjgjv\" (UID: \"aa581bf8-802c-4c64-80fe-83a1baf50a6e\") " pod="openstack/horizon-6464b9bcc6-tjgjv" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.253979 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7cc4c8d8bd-69kmb"] Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.255552 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7cc4c8d8bd-69kmb" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.270944 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7cc4c8d8bd-69kmb"] Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.348280 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a6a04bb-fa49-41f8-b75b-9c27873f8a1f-horizon-tls-certs\") pod \"horizon-7cc4c8d8bd-69kmb\" (UID: \"9a6a04bb-fa49-41f8-b75b-9c27873f8a1f\") " pod="openstack/horizon-7cc4c8d8bd-69kmb" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.348436 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a6a04bb-fa49-41f8-b75b-9c27873f8a1f-logs\") pod \"horizon-7cc4c8d8bd-69kmb\" (UID: \"9a6a04bb-fa49-41f8-b75b-9c27873f8a1f\") " pod="openstack/horizon-7cc4c8d8bd-69kmb" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.348513 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pv2px\" (UniqueName: \"kubernetes.io/projected/aa581bf8-802c-4c64-80fe-83a1baf50a6e-kube-api-access-pv2px\") pod \"horizon-6464b9bcc6-tjgjv\" (UID: \"aa581bf8-802c-4c64-80fe-83a1baf50a6e\") " pod="openstack/horizon-6464b9bcc6-tjgjv" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.348546 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9a6a04bb-fa49-41f8-b75b-9c27873f8a1f-config-data\") pod \"horizon-7cc4c8d8bd-69kmb\" (UID: \"9a6a04bb-fa49-41f8-b75b-9c27873f8a1f\") " pod="openstack/horizon-7cc4c8d8bd-69kmb" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.348589 4769 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aa581bf8-802c-4c64-80fe-83a1baf50a6e-config-data\") pod \"horizon-6464b9bcc6-tjgjv\" (UID: \"aa581bf8-802c-4c64-80fe-83a1baf50a6e\") " pod="openstack/horizon-6464b9bcc6-tjgjv" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.348679 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9a6a04bb-fa49-41f8-b75b-9c27873f8a1f-scripts\") pod \"horizon-7cc4c8d8bd-69kmb\" (UID: \"9a6a04bb-fa49-41f8-b75b-9c27873f8a1f\") " pod="openstack/horizon-7cc4c8d8bd-69kmb" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.348752 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/aa581bf8-802c-4c64-80fe-83a1baf50a6e-horizon-secret-key\") pod \"horizon-6464b9bcc6-tjgjv\" (UID: \"aa581bf8-802c-4c64-80fe-83a1baf50a6e\") " pod="openstack/horizon-6464b9bcc6-tjgjv" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.348821 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9a6a04bb-fa49-41f8-b75b-9c27873f8a1f-horizon-secret-key\") pod \"horizon-7cc4c8d8bd-69kmb\" (UID: \"9a6a04bb-fa49-41f8-b75b-9c27873f8a1f\") " pod="openstack/horizon-7cc4c8d8bd-69kmb" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.348850 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa581bf8-802c-4c64-80fe-83a1baf50a6e-combined-ca-bundle\") pod \"horizon-6464b9bcc6-tjgjv\" (UID: \"aa581bf8-802c-4c64-80fe-83a1baf50a6e\") " pod="openstack/horizon-6464b9bcc6-tjgjv" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.348887 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bq6l6\" (UniqueName: \"kubernetes.io/projected/9a6a04bb-fa49-41f8-b75b-9c27873f8a1f-kube-api-access-bq6l6\") pod \"horizon-7cc4c8d8bd-69kmb\" (UID: \"9a6a04bb-fa49-41f8-b75b-9c27873f8a1f\") " pod="openstack/horizon-7cc4c8d8bd-69kmb" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.348923 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa581bf8-802c-4c64-80fe-83a1baf50a6e-horizon-tls-certs\") pod \"horizon-6464b9bcc6-tjgjv\" (UID: \"aa581bf8-802c-4c64-80fe-83a1baf50a6e\") " pod="openstack/horizon-6464b9bcc6-tjgjv" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.348965 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aa581bf8-802c-4c64-80fe-83a1baf50a6e-scripts\") pod \"horizon-6464b9bcc6-tjgjv\" (UID: \"aa581bf8-802c-4c64-80fe-83a1baf50a6e\") " pod="openstack/horizon-6464b9bcc6-tjgjv" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.349019 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa581bf8-802c-4c64-80fe-83a1baf50a6e-logs\") pod \"horizon-6464b9bcc6-tjgjv\" (UID: \"aa581bf8-802c-4c64-80fe-83a1baf50a6e\") " pod="openstack/horizon-6464b9bcc6-tjgjv" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.349046 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a6a04bb-fa49-41f8-b75b-9c27873f8a1f-combined-ca-bundle\") pod \"horizon-7cc4c8d8bd-69kmb\" (UID: \"9a6a04bb-fa49-41f8-b75b-9c27873f8a1f\") " pod="openstack/horizon-7cc4c8d8bd-69kmb" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.350308 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aa581bf8-802c-4c64-80fe-83a1baf50a6e-config-data\") pod \"horizon-6464b9bcc6-tjgjv\" (UID: \"aa581bf8-802c-4c64-80fe-83a1baf50a6e\") " pod="openstack/horizon-6464b9bcc6-tjgjv" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.350869 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa581bf8-802c-4c64-80fe-83a1baf50a6e-logs\") pod \"horizon-6464b9bcc6-tjgjv\" (UID: \"aa581bf8-802c-4c64-80fe-83a1baf50a6e\") " pod="openstack/horizon-6464b9bcc6-tjgjv" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.351026 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aa581bf8-802c-4c64-80fe-83a1baf50a6e-scripts\") pod \"horizon-6464b9bcc6-tjgjv\" (UID: \"aa581bf8-802c-4c64-80fe-83a1baf50a6e\") " pod="openstack/horizon-6464b9bcc6-tjgjv" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.355627 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/aa581bf8-802c-4c64-80fe-83a1baf50a6e-horizon-secret-key\") pod \"horizon-6464b9bcc6-tjgjv\" (UID: \"aa581bf8-802c-4c64-80fe-83a1baf50a6e\") " pod="openstack/horizon-6464b9bcc6-tjgjv" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.360316 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa581bf8-802c-4c64-80fe-83a1baf50a6e-combined-ca-bundle\") pod \"horizon-6464b9bcc6-tjgjv\" (UID: \"aa581bf8-802c-4c64-80fe-83a1baf50a6e\") " pod="openstack/horizon-6464b9bcc6-tjgjv" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.360659 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa581bf8-802c-4c64-80fe-83a1baf50a6e-horizon-tls-certs\") pod \"horizon-6464b9bcc6-tjgjv\" (UID: \"aa581bf8-802c-4c64-80fe-83a1baf50a6e\") " pod="openstack/horizon-6464b9bcc6-tjgjv" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.366440 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pv2px\" (UniqueName: \"kubernetes.io/projected/aa581bf8-802c-4c64-80fe-83a1baf50a6e-kube-api-access-pv2px\") pod \"horizon-6464b9bcc6-tjgjv\" (UID: \"aa581bf8-802c-4c64-80fe-83a1baf50a6e\") " pod="openstack/horizon-6464b9bcc6-tjgjv" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.450700 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bq6l6\" (UniqueName: \"kubernetes.io/projected/9a6a04bb-fa49-41f8-b75b-9c27873f8a1f-kube-api-access-bq6l6\") pod \"horizon-7cc4c8d8bd-69kmb\" (UID: \"9a6a04bb-fa49-41f8-b75b-9c27873f8a1f\") " pod="openstack/horizon-7cc4c8d8bd-69kmb" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.451092 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a6a04bb-fa49-41f8-b75b-9c27873f8a1f-combined-ca-bundle\") pod \"horizon-7cc4c8d8bd-69kmb\" (UID: 
\"9a6a04bb-fa49-41f8-b75b-9c27873f8a1f\") " pod="openstack/horizon-7cc4c8d8bd-69kmb" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.451209 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a6a04bb-fa49-41f8-b75b-9c27873f8a1f-horizon-tls-certs\") pod \"horizon-7cc4c8d8bd-69kmb\" (UID: \"9a6a04bb-fa49-41f8-b75b-9c27873f8a1f\") " pod="openstack/horizon-7cc4c8d8bd-69kmb" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.451239 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a6a04bb-fa49-41f8-b75b-9c27873f8a1f-logs\") pod \"horizon-7cc4c8d8bd-69kmb\" (UID: \"9a6a04bb-fa49-41f8-b75b-9c27873f8a1f\") " pod="openstack/horizon-7cc4c8d8bd-69kmb" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.451282 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9a6a04bb-fa49-41f8-b75b-9c27873f8a1f-config-data\") pod \"horizon-7cc4c8d8bd-69kmb\" (UID: \"9a6a04bb-fa49-41f8-b75b-9c27873f8a1f\") " pod="openstack/horizon-7cc4c8d8bd-69kmb" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.451321 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9a6a04bb-fa49-41f8-b75b-9c27873f8a1f-scripts\") pod \"horizon-7cc4c8d8bd-69kmb\" (UID: \"9a6a04bb-fa49-41f8-b75b-9c27873f8a1f\") " pod="openstack/horizon-7cc4c8d8bd-69kmb" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.451376 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9a6a04bb-fa49-41f8-b75b-9c27873f8a1f-horizon-secret-key\") pod \"horizon-7cc4c8d8bd-69kmb\" (UID: \"9a6a04bb-fa49-41f8-b75b-9c27873f8a1f\") " pod="openstack/horizon-7cc4c8d8bd-69kmb" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.451736 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a6a04bb-fa49-41f8-b75b-9c27873f8a1f-logs\") pod \"horizon-7cc4c8d8bd-69kmb\" (UID: \"9a6a04bb-fa49-41f8-b75b-9c27873f8a1f\") " pod="openstack/horizon-7cc4c8d8bd-69kmb" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.452342 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9a6a04bb-fa49-41f8-b75b-9c27873f8a1f-scripts\") pod \"horizon-7cc4c8d8bd-69kmb\" (UID: \"9a6a04bb-fa49-41f8-b75b-9c27873f8a1f\") " pod="openstack/horizon-7cc4c8d8bd-69kmb" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.453091 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9a6a04bb-fa49-41f8-b75b-9c27873f8a1f-config-data\") pod \"horizon-7cc4c8d8bd-69kmb\" (UID: \"9a6a04bb-fa49-41f8-b75b-9c27873f8a1f\") " pod="openstack/horizon-7cc4c8d8bd-69kmb" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.459126 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6464b9bcc6-tjgjv" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.460238 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a6a04bb-fa49-41f8-b75b-9c27873f8a1f-combined-ca-bundle\") pod \"horizon-7cc4c8d8bd-69kmb\" (UID: \"9a6a04bb-fa49-41f8-b75b-9c27873f8a1f\") " pod="openstack/horizon-7cc4c8d8bd-69kmb" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.461761 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a6a04bb-fa49-41f8-b75b-9c27873f8a1f-horizon-tls-certs\") pod \"horizon-7cc4c8d8bd-69kmb\" (UID: \"9a6a04bb-fa49-41f8-b75b-9c27873f8a1f\") " pod="openstack/horizon-7cc4c8d8bd-69kmb" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.461988 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9a6a04bb-fa49-41f8-b75b-9c27873f8a1f-horizon-secret-key\") pod \"horizon-7cc4c8d8bd-69kmb\" (UID: \"9a6a04bb-fa49-41f8-b75b-9c27873f8a1f\") " pod="openstack/horizon-7cc4c8d8bd-69kmb" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.464739 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bq6l6\" (UniqueName: \"kubernetes.io/projected/9a6a04bb-fa49-41f8-b75b-9c27873f8a1f-kube-api-access-bq6l6\") pod \"horizon-7cc4c8d8bd-69kmb\" (UID: \"9a6a04bb-fa49-41f8-b75b-9c27873f8a1f\") " pod="openstack/horizon-7cc4c8d8bd-69kmb" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.508647 4769 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-pkl6g" podUID="ed1198a5-a7fa-4ab4-9656-8e9700deec37" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.89:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.580517 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7cc4c8d8bd-69kmb" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:31.193372 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"84850145-89ac-4660-8a13-6abde9509589","Type":"ContainerStarted","Data":"c238a5d975534ec018876b7571d6895f314000f24146c4017b29d9deb7a45c3a"} Jan 22 14:01:33 crc kubenswrapper[4769]: E0122 14:01:33.029668 4769 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Jan 22 14:01:33 crc kubenswrapper[4769]: E0122 14:01:33.030192 4769 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g78xp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-bjdj8_openstack(a0e92228-1a9b-49fc-9dfd-0493f70f5ee8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 14:01:33 crc kubenswrapper[4769]: E0122 14:01:33.031412 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-bjdj8" podUID="a0e92228-1a9b-49fc-9dfd-0493f70f5ee8" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:33.226471 4769 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openstack/glance-default-external-api-0" podUID="84850145-89ac-4660-8a13-6abde9509589" containerName="glance-log" containerID="cri-o://f3704eb4ce5b135ab7bee85bad1dffc4bc9ae3c908e85c1bad050b5ae696d451" gracePeriod=30 Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:33.226851 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="84850145-89ac-4660-8a13-6abde9509589" containerName="glance-httpd" containerID="cri-o://c238a5d975534ec018876b7571d6895f314000f24146c4017b29d9deb7a45c3a" gracePeriod=30 Jan 22 14:01:33 crc kubenswrapper[4769]: E0122 14:01:33.228553 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-bjdj8" podUID="a0e92228-1a9b-49fc-9dfd-0493f70f5ee8" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:33.250688 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=14.250670384 podStartE2EDuration="14.250670384s" podCreationTimestamp="2026-01-22 14:01:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:01:33.249385599 +0000 UTC m=+1072.660495528" watchObservedRunningTime="2026-01-22 14:01:33.250670384 +0000 UTC m=+1072.661780313" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:33.364742 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-8bcps"] Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:33.373182 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-88b8d5fbf-mdp8d"] Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:33.596673 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 14:01:34 crc kubenswrapper[4769]: I0122 14:01:34.238122 4769 generic.go:334] "Generic (PLEG): container finished" podID="84850145-89ac-4660-8a13-6abde9509589" containerID="c238a5d975534ec018876b7571d6895f314000f24146c4017b29d9deb7a45c3a" exitCode=0 Jan 22 14:01:34 crc kubenswrapper[4769]: I0122 14:01:34.238149 4769 generic.go:334] "Generic (PLEG): container finished" podID="84850145-89ac-4660-8a13-6abde9509589" containerID="f3704eb4ce5b135ab7bee85bad1dffc4bc9ae3c908e85c1bad050b5ae696d451" exitCode=143 Jan 22 14:01:34 crc kubenswrapper[4769]: I0122 14:01:34.238166 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"84850145-89ac-4660-8a13-6abde9509589","Type":"ContainerDied","Data":"c238a5d975534ec018876b7571d6895f314000f24146c4017b29d9deb7a45c3a"} Jan 22 14:01:34 crc kubenswrapper[4769]: I0122 14:01:34.238190 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"84850145-89ac-4660-8a13-6abde9509589","Type":"ContainerDied","Data":"f3704eb4ce5b135ab7bee85bad1dffc4bc9ae3c908e85c1bad050b5ae696d451"} Jan 22 14:01:43 crc kubenswrapper[4769]: E0122 14:01:43.194166 4769 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 22 14:01:43 crc kubenswrapper[4769]: E0122 14:01:43.194926 4769 
kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5b5h5h68bh59bh5d6h5f5h57bhfch58ch546h54ch5dhd6h67dh84h596h84h565h677h597h649h54bh69h68fh7fhcbh5c6h685hdfh656h64h55q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jb9wc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-89bdb59-vr94p_openstack(5c4b43cf-c766-4b56-a016-a3f2d26656a1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 14:01:43 crc kubenswrapper[4769]: E0122 14:01:43.199011 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-89bdb59-vr94p" podUID="5c4b43cf-c766-4b56-a016-a3f2d26656a1" Jan 22 14:01:46 crc kubenswrapper[4769]: I0122 14:01:46.331042 4769 generic.go:334] "Generic (PLEG): container finished" podID="77ac558e-a319-4c27-9869-fee6f85736e5" containerID="df266f1e50e71fe12d82262c0a9066d4bf0ba22b1f00a59909f486af0c226b44" exitCode=0 Jan 22 14:01:46 crc kubenswrapper[4769]: I0122 14:01:46.331556 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wdqr9" event={"ID":"77ac558e-a319-4c27-9869-fee6f85736e5","Type":"ContainerDied","Data":"df266f1e50e71fe12d82262c0a9066d4bf0ba22b1f00a59909f486af0c226b44"} Jan 22 14:01:50 crc kubenswrapper[4769]: I0122 14:01:50.051695 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 22 14:01:50 crc kubenswrapper[4769]: I0122 14:01:50.052406 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/glance-default-external-api-0" Jan 22 14:01:55 crc kubenswrapper[4769]: I0122 14:01:55.424654 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-8bcps" event={"ID":"09f60324-cca8-4988-bf9b-6967d2bfe9f6","Type":"ContainerStarted","Data":"de08ee3bddd1437f1405dc62dcd35ee86837e2196876742c81be83ac8aaa6642"} Jan 22 14:01:55 crc kubenswrapper[4769]: W0122 14:01:55.455734 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20251361_dc9f_403b_bffa_2a52a61e1bf4.slice/crio-64a2d7094305fe1e188755e9a76ea175f1aa7cbe4ae9900a3a9f08389e56e17f WatchSource:0}: Error finding container 64a2d7094305fe1e188755e9a76ea175f1aa7cbe4ae9900a3a9f08389e56e17f: Status 404 returned error can't find the container with id 64a2d7094305fe1e188755e9a76ea175f1aa7cbe4ae9900a3a9f08389e56e17f Jan 22 14:01:55 crc kubenswrapper[4769]: I0122 14:01:55.554992 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-89bdb59-vr94p" Jan 22 14:01:55 crc kubenswrapper[4769]: I0122 14:01:55.563280 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-wdqr9" Jan 22 14:01:55 crc kubenswrapper[4769]: I0122 14:01:55.628442 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77ac558e-a319-4c27-9869-fee6f85736e5-config-data\") pod \"77ac558e-a319-4c27-9869-fee6f85736e5\" (UID: \"77ac558e-a319-4c27-9869-fee6f85736e5\") " Jan 22 14:01:55 crc kubenswrapper[4769]: I0122 14:01:55.628500 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5c4b43cf-c766-4b56-a016-a3f2d26656a1-horizon-secret-key\") pod \"5c4b43cf-c766-4b56-a016-a3f2d26656a1\" (UID: \"5c4b43cf-c766-4b56-a016-a3f2d26656a1\") " Jan 22 14:01:55 crc kubenswrapper[4769]: I0122 14:01:55.628542 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77ac558e-a319-4c27-9869-fee6f85736e5-combined-ca-bundle\") pod \"77ac558e-a319-4c27-9869-fee6f85736e5\" (UID: \"77ac558e-a319-4c27-9869-fee6f85736e5\") " Jan 22 14:01:55 crc kubenswrapper[4769]: I0122 14:01:55.628573 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jb9wc\" (UniqueName: \"kubernetes.io/projected/5c4b43cf-c766-4b56-a016-a3f2d26656a1-kube-api-access-jb9wc\") pod \"5c4b43cf-c766-4b56-a016-a3f2d26656a1\" (UID: \"5c4b43cf-c766-4b56-a016-a3f2d26656a1\") " Jan 22 14:01:55 crc kubenswrapper[4769]: I0122 14:01:55.628597 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77ac558e-a319-4c27-9869-fee6f85736e5-scripts\") pod \"77ac558e-a319-4c27-9869-fee6f85736e5\" (UID: \"77ac558e-a319-4c27-9869-fee6f85736e5\") " Jan 22 14:01:55 crc kubenswrapper[4769]: I0122 14:01:55.628638 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/77ac558e-a319-4c27-9869-fee6f85736e5-credential-keys\") pod \"77ac558e-a319-4c27-9869-fee6f85736e5\" (UID: \"77ac558e-a319-4c27-9869-fee6f85736e5\") " Jan 22 14:01:55 crc kubenswrapper[4769]: I0122 14:01:55.628662 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" 
(UniqueName: \"kubernetes.io/secret/77ac558e-a319-4c27-9869-fee6f85736e5-fernet-keys\") pod \"77ac558e-a319-4c27-9869-fee6f85736e5\" (UID: \"77ac558e-a319-4c27-9869-fee6f85736e5\") " Jan 22 14:01:55 crc kubenswrapper[4769]: I0122 14:01:55.628679 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9vbb\" (UniqueName: \"kubernetes.io/projected/77ac558e-a319-4c27-9869-fee6f85736e5-kube-api-access-f9vbb\") pod \"77ac558e-a319-4c27-9869-fee6f85736e5\" (UID: \"77ac558e-a319-4c27-9869-fee6f85736e5\") " Jan 22 14:01:55 crc kubenswrapper[4769]: I0122 14:01:55.628752 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5c4b43cf-c766-4b56-a016-a3f2d26656a1-config-data\") pod \"5c4b43cf-c766-4b56-a016-a3f2d26656a1\" (UID: \"5c4b43cf-c766-4b56-a016-a3f2d26656a1\") " Jan 22 14:01:55 crc kubenswrapper[4769]: I0122 14:01:55.629469 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5c4b43cf-c766-4b56-a016-a3f2d26656a1-scripts\") pod \"5c4b43cf-c766-4b56-a016-a3f2d26656a1\" (UID: \"5c4b43cf-c766-4b56-a016-a3f2d26656a1\") " Jan 22 14:01:55 crc kubenswrapper[4769]: I0122 14:01:55.629506 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c4b43cf-c766-4b56-a016-a3f2d26656a1-logs\") pod \"5c4b43cf-c766-4b56-a016-a3f2d26656a1\" (UID: \"5c4b43cf-c766-4b56-a016-a3f2d26656a1\") " Jan 22 14:01:55 crc kubenswrapper[4769]: I0122 14:01:55.629997 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c4b43cf-c766-4b56-a016-a3f2d26656a1-logs" (OuterVolumeSpecName: "logs") pod "5c4b43cf-c766-4b56-a016-a3f2d26656a1" (UID: "5c4b43cf-c766-4b56-a016-a3f2d26656a1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 14:01:55 crc kubenswrapper[4769]: I0122 14:01:55.630160 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c4b43cf-c766-4b56-a016-a3f2d26656a1-scripts" (OuterVolumeSpecName: "scripts") pod "5c4b43cf-c766-4b56-a016-a3f2d26656a1" (UID: "5c4b43cf-c766-4b56-a016-a3f2d26656a1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:01:55 crc kubenswrapper[4769]: I0122 14:01:55.630233 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c4b43cf-c766-4b56-a016-a3f2d26656a1-config-data" (OuterVolumeSpecName: "config-data") pod "5c4b43cf-c766-4b56-a016-a3f2d26656a1" (UID: "5c4b43cf-c766-4b56-a016-a3f2d26656a1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:01:55 crc kubenswrapper[4769]: I0122 14:01:55.630899 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5c4b43cf-c766-4b56-a016-a3f2d26656a1-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:55 crc kubenswrapper[4769]: I0122 14:01:55.630918 4769 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5c4b43cf-c766-4b56-a016-a3f2d26656a1-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:55 crc kubenswrapper[4769]: I0122 14:01:55.630929 4769 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c4b43cf-c766-4b56-a016-a3f2d26656a1-logs\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:55 crc kubenswrapper[4769]: I0122 14:01:55.633100 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77ac558e-a319-4c27-9869-fee6f85736e5-scripts" (OuterVolumeSpecName: "scripts") pod "77ac558e-a319-4c27-9869-fee6f85736e5" (UID: "77ac558e-a319-4c27-9869-fee6f85736e5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:01:55 crc kubenswrapper[4769]: I0122 14:01:55.633567 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c4b43cf-c766-4b56-a016-a3f2d26656a1-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "5c4b43cf-c766-4b56-a016-a3f2d26656a1" (UID: "5c4b43cf-c766-4b56-a016-a3f2d26656a1"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:01:55 crc kubenswrapper[4769]: I0122 14:01:55.633646 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77ac558e-a319-4c27-9869-fee6f85736e5-kube-api-access-f9vbb" (OuterVolumeSpecName: "kube-api-access-f9vbb") pod "77ac558e-a319-4c27-9869-fee6f85736e5" (UID: "77ac558e-a319-4c27-9869-fee6f85736e5"). InnerVolumeSpecName "kube-api-access-f9vbb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:01:55 crc kubenswrapper[4769]: I0122 14:01:55.633989 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77ac558e-a319-4c27-9869-fee6f85736e5-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "77ac558e-a319-4c27-9869-fee6f85736e5" (UID: "77ac558e-a319-4c27-9869-fee6f85736e5"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:01:55 crc kubenswrapper[4769]: I0122 14:01:55.636006 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c4b43cf-c766-4b56-a016-a3f2d26656a1-kube-api-access-jb9wc" (OuterVolumeSpecName: "kube-api-access-jb9wc") pod "5c4b43cf-c766-4b56-a016-a3f2d26656a1" (UID: "5c4b43cf-c766-4b56-a016-a3f2d26656a1"). InnerVolumeSpecName "kube-api-access-jb9wc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:01:55 crc kubenswrapper[4769]: I0122 14:01:55.638377 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77ac558e-a319-4c27-9869-fee6f85736e5-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "77ac558e-a319-4c27-9869-fee6f85736e5" (UID: "77ac558e-a319-4c27-9869-fee6f85736e5"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:01:55 crc kubenswrapper[4769]: I0122 14:01:55.652241 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77ac558e-a319-4c27-9869-fee6f85736e5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "77ac558e-a319-4c27-9869-fee6f85736e5" (UID: "77ac558e-a319-4c27-9869-fee6f85736e5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:01:55 crc kubenswrapper[4769]: I0122 14:01:55.656773 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77ac558e-a319-4c27-9869-fee6f85736e5-config-data" (OuterVolumeSpecName: "config-data") pod "77ac558e-a319-4c27-9869-fee6f85736e5" (UID: "77ac558e-a319-4c27-9869-fee6f85736e5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:01:55 crc kubenswrapper[4769]: I0122 14:01:55.733023 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77ac558e-a319-4c27-9869-fee6f85736e5-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:55 crc kubenswrapper[4769]: I0122 14:01:55.733314 4769 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5c4b43cf-c766-4b56-a016-a3f2d26656a1-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:55 crc kubenswrapper[4769]: I0122 14:01:55.733327 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77ac558e-a319-4c27-9869-fee6f85736e5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:55 crc kubenswrapper[4769]: I0122 14:01:55.733335 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jb9wc\" (UniqueName: \"kubernetes.io/projected/5c4b43cf-c766-4b56-a016-a3f2d26656a1-kube-api-access-jb9wc\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:55 crc kubenswrapper[4769]: I0122 14:01:55.733344 4769 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77ac558e-a319-4c27-9869-fee6f85736e5-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:55 crc kubenswrapper[4769]: I0122 14:01:55.733351 4769 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/77ac558e-a319-4c27-9869-fee6f85736e5-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:55 crc kubenswrapper[4769]: I0122 14:01:55.733359 4769 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/77ac558e-a319-4c27-9869-fee6f85736e5-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:55 crc kubenswrapper[4769]: I0122 14:01:55.733370 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9vbb\" (UniqueName: \"kubernetes.io/projected/77ac558e-a319-4c27-9869-fee6f85736e5-kube-api-access-f9vbb\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:55 crc kubenswrapper[4769]: E0122 14:01:55.880395 4769 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 22 14:01:55 crc kubenswrapper[4769]: E0122 14:01:55.880607 4769 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n574h66dh588h5b8h655h54fh98h64bh555h64fh545h548hd6h676hd9h5ffh5f4h6fh656h56fh69h85h654h599h58bh8fh86h5ffhb8h7fh56bhffq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wfrwn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-5c66f6f78c-g92qm_openstack(f79e78c3-4c98-41e2-be1e-19d794ed1c17): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 14:01:55 crc kubenswrapper[4769]: E0122 14:01:55.884299 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-5c66f6f78c-g92qm" podUID="f79e78c3-4c98-41e2-be1e-19d794ed1c17" Jan 22 14:01:56 crc kubenswrapper[4769]: I0122 14:01:56.445264 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-89bdb59-vr94p" event={"ID":"5c4b43cf-c766-4b56-a016-a3f2d26656a1","Type":"ContainerDied","Data":"1d75749a17b6133af8d4548979dade04116fbb2ac5e6040ef99419c36e560e9d"} Jan 22 14:01:56 crc kubenswrapper[4769]: I0122 14:01:56.445291 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-89bdb59-vr94p" Jan 22 14:01:56 crc kubenswrapper[4769]: I0122 14:01:56.453504 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"20251361-dc9f-403b-bffa-2a52a61e1bf4","Type":"ContainerStarted","Data":"64a2d7094305fe1e188755e9a76ea175f1aa7cbe4ae9900a3a9f08389e56e17f"} Jan 22 14:01:56 crc kubenswrapper[4769]: I0122 14:01:56.458742 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-wdqr9" Jan 22 14:01:56 crc kubenswrapper[4769]: I0122 14:01:56.458752 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wdqr9" event={"ID":"77ac558e-a319-4c27-9869-fee6f85736e5","Type":"ContainerDied","Data":"6ef39fb051bbbb437f666b731505375e45c29b3f70e4b2350cee07e7caf59e41"} Jan 22 14:01:56 crc kubenswrapper[4769]: I0122 14:01:56.458965 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ef39fb051bbbb437f666b731505375e45c29b3f70e4b2350cee07e7caf59e41" Jan 22 14:01:56 crc kubenswrapper[4769]: I0122 14:01:56.460277 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-88b8d5fbf-mdp8d" event={"ID":"c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1","Type":"ContainerStarted","Data":"054e89b41fe504baa24efa6fdc5ef87502ed22b3b42e8052873a0df4c426e7ed"} Jan 22 14:01:56 crc kubenswrapper[4769]: I0122 14:01:56.527936 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-89bdb59-vr94p"] Jan 22 14:01:56 crc kubenswrapper[4769]: I0122 14:01:56.534185 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-89bdb59-vr94p"] Jan 22 14:01:56 crc kubenswrapper[4769]: E0122 14:01:56.641769 4769 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c4b43cf_c766_4b56_a016_a3f2d26656a1.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod77ac558e_a319_4c27_9869_fee6f85736e5.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod77ac558e_a319_4c27_9869_fee6f85736e5.slice/crio-6ef39fb051bbbb437f666b731505375e45c29b3f70e4b2350cee07e7caf59e41\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c4b43cf_c766_4b56_a016_a3f2d26656a1.slice/crio-1d75749a17b6133af8d4548979dade04116fbb2ac5e6040ef99419c36e560e9d\": RecentStats: unable to find data in memory cache]" Jan 22 14:01:56 crc kubenswrapper[4769]: I0122 14:01:56.682429 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-wdqr9"] Jan 22 14:01:56 crc kubenswrapper[4769]: I0122 14:01:56.696707 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-wdqr9"] Jan 22 14:01:56 crc kubenswrapper[4769]: I0122 14:01:56.764343 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-nv6tp"] Jan 22 14:01:56 crc kubenswrapper[4769]: E0122 14:01:56.764739 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77ac558e-a319-4c27-9869-fee6f85736e5" containerName="keystone-bootstrap" Jan 22 14:01:56 crc kubenswrapper[4769]: I0122 14:01:56.764762 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="77ac558e-a319-4c27-9869-fee6f85736e5" containerName="keystone-bootstrap" Jan 22 14:01:56 crc kubenswrapper[4769]: I0122 14:01:56.764987 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="77ac558e-a319-4c27-9869-fee6f85736e5" containerName="keystone-bootstrap" Jan 22 14:01:56 crc kubenswrapper[4769]: I0122 14:01:56.765605 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-nv6tp" Jan 22 14:01:56 crc kubenswrapper[4769]: I0122 14:01:56.768068 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 22 14:01:56 crc kubenswrapper[4769]: I0122 14:01:56.768068 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 22 14:01:56 crc kubenswrapper[4769]: I0122 14:01:56.768397 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 22 14:01:56 crc kubenswrapper[4769]: I0122 14:01:56.769557 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 22 14:01:56 crc kubenswrapper[4769]: I0122 14:01:56.769575 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-nrw5d" Jan 22 14:01:56 crc kubenswrapper[4769]: I0122 14:01:56.778854 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-nv6tp"] Jan 22 14:01:56 crc kubenswrapper[4769]: I0122 14:01:56.861770 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b938618-acdf-4f5f-8a04-daabc17cbb0c-config-data\") pod \"keystone-bootstrap-nv6tp\" (UID: \"4b938618-acdf-4f5f-8a04-daabc17cbb0c\") " pod="openstack/keystone-bootstrap-nv6tp" Jan 22 14:01:56 crc kubenswrapper[4769]: I0122 14:01:56.862020 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b938618-acdf-4f5f-8a04-daabc17cbb0c-scripts\") pod \"keystone-bootstrap-nv6tp\" (UID: \"4b938618-acdf-4f5f-8a04-daabc17cbb0c\") " pod="openstack/keystone-bootstrap-nv6tp" Jan 22 14:01:56 crc kubenswrapper[4769]: I0122 14:01:56.862138 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4b938618-acdf-4f5f-8a04-daabc17cbb0c-credential-keys\") pod \"keystone-bootstrap-nv6tp\" (UID: \"4b938618-acdf-4f5f-8a04-daabc17cbb0c\") " pod="openstack/keystone-bootstrap-nv6tp" Jan 22 14:01:56 crc kubenswrapper[4769]: I0122 14:01:56.862166 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b938618-acdf-4f5f-8a04-daabc17cbb0c-combined-ca-bundle\") pod \"keystone-bootstrap-nv6tp\" (UID: \"4b938618-acdf-4f5f-8a04-daabc17cbb0c\") " pod="openstack/keystone-bootstrap-nv6tp" Jan 22 14:01:56 crc kubenswrapper[4769]: I0122 14:01:56.862240 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsgsj\" (UniqueName: \"kubernetes.io/projected/4b938618-acdf-4f5f-8a04-daabc17cbb0c-kube-api-access-dsgsj\") pod \"keystone-bootstrap-nv6tp\" (UID: \"4b938618-acdf-4f5f-8a04-daabc17cbb0c\") " pod="openstack/keystone-bootstrap-nv6tp" Jan 22 14:01:56 crc kubenswrapper[4769]: I0122 14:01:56.862261 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4b938618-acdf-4f5f-8a04-daabc17cbb0c-fernet-keys\") pod \"keystone-bootstrap-nv6tp\" (UID: \"4b938618-acdf-4f5f-8a04-daabc17cbb0c\") " pod="openstack/keystone-bootstrap-nv6tp" Jan 22 14:01:56 crc kubenswrapper[4769]: I0122 14:01:56.894968 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="5c4b43cf-c766-4b56-a016-a3f2d26656a1" path="/var/lib/kubelet/pods/5c4b43cf-c766-4b56-a016-a3f2d26656a1/volumes" Jan 22 14:01:56 crc kubenswrapper[4769]: I0122 14:01:56.895409 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77ac558e-a319-4c27-9869-fee6f85736e5" path="/var/lib/kubelet/pods/77ac558e-a319-4c27-9869-fee6f85736e5/volumes" Jan 22 14:01:56 crc kubenswrapper[4769]: I0122 14:01:56.966651 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b938618-acdf-4f5f-8a04-daabc17cbb0c-config-data\") pod \"keystone-bootstrap-nv6tp\" (UID: \"4b938618-acdf-4f5f-8a04-daabc17cbb0c\") " pod="openstack/keystone-bootstrap-nv6tp" Jan 22 14:01:56 crc kubenswrapper[4769]: I0122 14:01:56.966775 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b938618-acdf-4f5f-8a04-daabc17cbb0c-scripts\") pod \"keystone-bootstrap-nv6tp\" (UID: \"4b938618-acdf-4f5f-8a04-daabc17cbb0c\") " pod="openstack/keystone-bootstrap-nv6tp" Jan 22 14:01:56 crc kubenswrapper[4769]: I0122 14:01:56.966867 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4b938618-acdf-4f5f-8a04-daabc17cbb0c-credential-keys\") pod \"keystone-bootstrap-nv6tp\" (UID: \"4b938618-acdf-4f5f-8a04-daabc17cbb0c\") " pod="openstack/keystone-bootstrap-nv6tp" Jan 22 14:01:56 crc kubenswrapper[4769]: I0122 14:01:56.966896 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b938618-acdf-4f5f-8a04-daabc17cbb0c-combined-ca-bundle\") pod \"keystone-bootstrap-nv6tp\" (UID: \"4b938618-acdf-4f5f-8a04-daabc17cbb0c\") " pod="openstack/keystone-bootstrap-nv6tp" Jan 22 14:01:56 crc kubenswrapper[4769]: I0122 14:01:56.966941 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsgsj\" (UniqueName: \"kubernetes.io/projected/4b938618-acdf-4f5f-8a04-daabc17cbb0c-kube-api-access-dsgsj\") pod \"keystone-bootstrap-nv6tp\" (UID: \"4b938618-acdf-4f5f-8a04-daabc17cbb0c\") " pod="openstack/keystone-bootstrap-nv6tp" Jan 22 14:01:56 crc kubenswrapper[4769]: I0122 14:01:56.966960 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4b938618-acdf-4f5f-8a04-daabc17cbb0c-fernet-keys\") pod \"keystone-bootstrap-nv6tp\" (UID: \"4b938618-acdf-4f5f-8a04-daabc17cbb0c\") " pod="openstack/keystone-bootstrap-nv6tp" Jan 22 14:01:56 crc kubenswrapper[4769]: I0122 14:01:56.972989 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b938618-acdf-4f5f-8a04-daabc17cbb0c-config-data\") pod \"keystone-bootstrap-nv6tp\" (UID: \"4b938618-acdf-4f5f-8a04-daabc17cbb0c\") " pod="openstack/keystone-bootstrap-nv6tp" Jan 22 14:01:56 crc kubenswrapper[4769]: I0122 14:01:56.973182 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4b938618-acdf-4f5f-8a04-daabc17cbb0c-credential-keys\") pod \"keystone-bootstrap-nv6tp\" (UID: \"4b938618-acdf-4f5f-8a04-daabc17cbb0c\") " pod="openstack/keystone-bootstrap-nv6tp" Jan 22 14:01:56 crc kubenswrapper[4769]: I0122 14:01:56.973866 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/4b938618-acdf-4f5f-8a04-daabc17cbb0c-scripts\") pod \"keystone-bootstrap-nv6tp\" (UID: \"4b938618-acdf-4f5f-8a04-daabc17cbb0c\") " pod="openstack/keystone-bootstrap-nv6tp" Jan 22 14:01:56 crc kubenswrapper[4769]: I0122 14:01:56.974355 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b938618-acdf-4f5f-8a04-daabc17cbb0c-combined-ca-bundle\") pod \"keystone-bootstrap-nv6tp\" (UID: \"4b938618-acdf-4f5f-8a04-daabc17cbb0c\") " pod="openstack/keystone-bootstrap-nv6tp" Jan 22 14:01:56 crc kubenswrapper[4769]: I0122 14:01:56.974660 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4b938618-acdf-4f5f-8a04-daabc17cbb0c-fernet-keys\") pod \"keystone-bootstrap-nv6tp\" (UID: \"4b938618-acdf-4f5f-8a04-daabc17cbb0c\") " pod="openstack/keystone-bootstrap-nv6tp" Jan 22 14:01:56 crc kubenswrapper[4769]: I0122 14:01:56.991768 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsgsj\" (UniqueName: \"kubernetes.io/projected/4b938618-acdf-4f5f-8a04-daabc17cbb0c-kube-api-access-dsgsj\") pod \"keystone-bootstrap-nv6tp\" (UID: \"4b938618-acdf-4f5f-8a04-daabc17cbb0c\") " pod="openstack/keystone-bootstrap-nv6tp" Jan 22 14:01:57 crc kubenswrapper[4769]: I0122 14:01:57.088251 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-nv6tp" Jan 22 14:01:57 crc kubenswrapper[4769]: E0122 14:01:57.162677 4769 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 22 14:01:57 crc kubenswrapper[4769]: E0122 14:01:57.162934 4769 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hrgpx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-l4hnw_openstack(3eb8819f-512d-43d8-a59e-1ba8e7e1fb06): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 14:01:57 crc kubenswrapper[4769]: E0122 14:01:57.164350 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-l4hnw" podUID="3eb8819f-512d-43d8-a59e-1ba8e7e1fb06" Jan 22 14:01:57 crc kubenswrapper[4769]: E0122 14:01:57.434703 4769 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Jan 22 14:01:57 crc kubenswrapper[4769]: E0122 14:01:57.435267 4769 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5d9hbh6hffhc9h58bh668hc4hddhfdh5cbh677h567hf5h688h544h5f7hc5h65bh54fhdfh58fhf8h8bhcbh595h57ch56ch66hf9h55bh55dq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bnkhr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(7464458e-c450-4b87-80d6-30abeb62e9d2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 14:01:57 crc kubenswrapper[4769]: E0122 14:01:57.473025 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-l4hnw" podUID="3eb8819f-512d-43d8-a59e-1ba8e7e1fb06" Jan 22 14:01:57 crc kubenswrapper[4769]: E0122 14:01:57.853582 4769 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Jan 22 14:01:57 crc kubenswrapper[4769]: E0122 14:01:57.853792 4769 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pbsw7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-zzjpd_openstack(a7f766e1-262c-4861-a117-2454631e284f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 14:01:57 crc kubenswrapper[4769]: E0122 14:01:57.855009 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-zzjpd" podUID="a7f766e1-262c-4861-a117-2454631e284f" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.010437 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.011891 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5c66f6f78c-g92qm" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.087415 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvsdt\" (UniqueName: \"kubernetes.io/projected/84850145-89ac-4660-8a13-6abde9509589-kube-api-access-vvsdt\") pod \"84850145-89ac-4660-8a13-6abde9509589\" (UID: \"84850145-89ac-4660-8a13-6abde9509589\") " Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.087473 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"84850145-89ac-4660-8a13-6abde9509589\" (UID: \"84850145-89ac-4660-8a13-6abde9509589\") " Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.087512 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84850145-89ac-4660-8a13-6abde9509589-config-data\") pod \"84850145-89ac-4660-8a13-6abde9509589\" (UID: \"84850145-89ac-4660-8a13-6abde9509589\") " Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.087547 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84850145-89ac-4660-8a13-6abde9509589-logs\") pod \"84850145-89ac-4660-8a13-6abde9509589\" (UID: \"84850145-89ac-4660-8a13-6abde9509589\") " Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.087580 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84850145-89ac-4660-8a13-6abde9509589-combined-ca-bundle\") pod \"84850145-89ac-4660-8a13-6abde9509589\" (UID: \"84850145-89ac-4660-8a13-6abde9509589\") " Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.087670 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wfrwn\" (UniqueName: \"kubernetes.io/projected/f79e78c3-4c98-41e2-be1e-19d794ed1c17-kube-api-access-wfrwn\") pod \"f79e78c3-4c98-41e2-be1e-19d794ed1c17\" (UID: \"f79e78c3-4c98-41e2-be1e-19d794ed1c17\") " Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.087702 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f79e78c3-4c98-41e2-be1e-19d794ed1c17-config-data\") pod \"f79e78c3-4c98-41e2-be1e-19d794ed1c17\" (UID: \"f79e78c3-4c98-41e2-be1e-19d794ed1c17\") " Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.087747 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f79e78c3-4c98-41e2-be1e-19d794ed1c17-horizon-secret-key\") pod \"f79e78c3-4c98-41e2-be1e-19d794ed1c17\" (UID: \"f79e78c3-4c98-41e2-be1e-19d794ed1c17\") " Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.087774 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f79e78c3-4c98-41e2-be1e-19d794ed1c17-scripts\") pod \"f79e78c3-4c98-41e2-be1e-19d794ed1c17\" (UID: \"f79e78c3-4c98-41e2-be1e-19d794ed1c17\") " Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.087844 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84850145-89ac-4660-8a13-6abde9509589-scripts\") pod \"84850145-89ac-4660-8a13-6abde9509589\" (UID: \"84850145-89ac-4660-8a13-6abde9509589\") " Jan 22 
14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.087916 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/84850145-89ac-4660-8a13-6abde9509589-httpd-run\") pod \"84850145-89ac-4660-8a13-6abde9509589\" (UID: \"84850145-89ac-4660-8a13-6abde9509589\") " Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.087944 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f79e78c3-4c98-41e2-be1e-19d794ed1c17-logs\") pod \"f79e78c3-4c98-41e2-be1e-19d794ed1c17\" (UID: \"f79e78c3-4c98-41e2-be1e-19d794ed1c17\") " Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.088600 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f79e78c3-4c98-41e2-be1e-19d794ed1c17-logs" (OuterVolumeSpecName: "logs") pod "f79e78c3-4c98-41e2-be1e-19d794ed1c17" (UID: "f79e78c3-4c98-41e2-be1e-19d794ed1c17"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.089126 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f79e78c3-4c98-41e2-be1e-19d794ed1c17-scripts" (OuterVolumeSpecName: "scripts") pod "f79e78c3-4c98-41e2-be1e-19d794ed1c17" (UID: "f79e78c3-4c98-41e2-be1e-19d794ed1c17"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.089240 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84850145-89ac-4660-8a13-6abde9509589-logs" (OuterVolumeSpecName: "logs") pod "84850145-89ac-4660-8a13-6abde9509589" (UID: "84850145-89ac-4660-8a13-6abde9509589"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.089746 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f79e78c3-4c98-41e2-be1e-19d794ed1c17-config-data" (OuterVolumeSpecName: "config-data") pod "f79e78c3-4c98-41e2-be1e-19d794ed1c17" (UID: "f79e78c3-4c98-41e2-be1e-19d794ed1c17"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.090242 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84850145-89ac-4660-8a13-6abde9509589-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "84850145-89ac-4660-8a13-6abde9509589" (UID: "84850145-89ac-4660-8a13-6abde9509589"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.092681 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84850145-89ac-4660-8a13-6abde9509589-scripts" (OuterVolumeSpecName: "scripts") pod "84850145-89ac-4660-8a13-6abde9509589" (UID: "84850145-89ac-4660-8a13-6abde9509589"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.093481 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f79e78c3-4c98-41e2-be1e-19d794ed1c17-kube-api-access-wfrwn" (OuterVolumeSpecName: "kube-api-access-wfrwn") pod "f79e78c3-4c98-41e2-be1e-19d794ed1c17" (UID: "f79e78c3-4c98-41e2-be1e-19d794ed1c17"). 
InnerVolumeSpecName "kube-api-access-wfrwn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.094149 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "84850145-89ac-4660-8a13-6abde9509589" (UID: "84850145-89ac-4660-8a13-6abde9509589"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.094450 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84850145-89ac-4660-8a13-6abde9509589-kube-api-access-vvsdt" (OuterVolumeSpecName: "kube-api-access-vvsdt") pod "84850145-89ac-4660-8a13-6abde9509589" (UID: "84850145-89ac-4660-8a13-6abde9509589"). InnerVolumeSpecName "kube-api-access-vvsdt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.094966 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f79e78c3-4c98-41e2-be1e-19d794ed1c17-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "f79e78c3-4c98-41e2-be1e-19d794ed1c17" (UID: "f79e78c3-4c98-41e2-be1e-19d794ed1c17"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.172425 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84850145-89ac-4660-8a13-6abde9509589-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "84850145-89ac-4660-8a13-6abde9509589" (UID: "84850145-89ac-4660-8a13-6abde9509589"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.189881 4769 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f79e78c3-4c98-41e2-be1e-19d794ed1c17-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.189919 4769 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84850145-89ac-4660-8a13-6abde9509589-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.189934 4769 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/84850145-89ac-4660-8a13-6abde9509589-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.189949 4769 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f79e78c3-4c98-41e2-be1e-19d794ed1c17-logs\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.189962 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvsdt\" (UniqueName: \"kubernetes.io/projected/84850145-89ac-4660-8a13-6abde9509589-kube-api-access-vvsdt\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.189994 4769 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.190009 4769 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84850145-89ac-4660-8a13-6abde9509589-logs\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.190022 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84850145-89ac-4660-8a13-6abde9509589-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.190033 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wfrwn\" (UniqueName: \"kubernetes.io/projected/f79e78c3-4c98-41e2-be1e-19d794ed1c17-kube-api-access-wfrwn\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.190046 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f79e78c3-4c98-41e2-be1e-19d794ed1c17-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.190059 4769 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f79e78c3-4c98-41e2-be1e-19d794ed1c17-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.211573 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84850145-89ac-4660-8a13-6abde9509589-config-data" (OuterVolumeSpecName: "config-data") pod "84850145-89ac-4660-8a13-6abde9509589" (UID: "84850145-89ac-4660-8a13-6abde9509589"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.218087 4769 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.267110 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6464b9bcc6-tjgjv"] Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.291841 4769 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.291873 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84850145-89ac-4660-8a13-6abde9509589-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.380252 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7cc4c8d8bd-69kmb"] Jan 22 14:01:58 crc kubenswrapper[4769]: W0122 14:01:58.385489 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9a6a04bb_fa49_41f8_b75b_9c27873f8a1f.slice/crio-69b2f54964dafa4887d2100a41714d9572767b4c29fc6c3c4e428721442fb776 WatchSource:0}: Error finding container 69b2f54964dafa4887d2100a41714d9572767b4c29fc6c3c4e428721442fb776: Status 404 returned error can't find the container with id 69b2f54964dafa4887d2100a41714d9572767b4c29fc6c3c4e428721442fb776 Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.479497 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7cc4c8d8bd-69kmb" event={"ID":"9a6a04bb-fa49-41f8-b75b-9c27873f8a1f","Type":"ContainerStarted","Data":"69b2f54964dafa4887d2100a41714d9572767b4c29fc6c3c4e428721442fb776"} Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.480438 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-nv6tp"] Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.481452 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"84850145-89ac-4660-8a13-6abde9509589","Type":"ContainerDied","Data":"dbba61067789f8e4b68dedf1066a578d68118546758df6cfdb39ad7d7ae20588"} Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.481469 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.481492 4769 scope.go:117] "RemoveContainer" containerID="c238a5d975534ec018876b7571d6895f314000f24146c4017b29d9deb7a45c3a" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.486033 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6464b9bcc6-tjgjv" event={"ID":"aa581bf8-802c-4c64-80fe-83a1baf50a6e","Type":"ContainerStarted","Data":"a21b69f798a23fdcfdfb92adcc62b30839c1be6a1c5c04d00a869ead5ddc22a7"} Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.488621 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5c66f6f78c-g92qm" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.489133 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c66f6f78c-g92qm" event={"ID":"f79e78c3-4c98-41e2-be1e-19d794ed1c17","Type":"ContainerDied","Data":"d324a8923d4121e52b8f50a61c76fa823727fdd525010d41f8feff37a542e75d"} Jan 22 14:01:58 crc kubenswrapper[4769]: E0122 14:01:58.489697 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-zzjpd" podUID="a7f766e1-262c-4861-a117-2454631e284f" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.505066 4769 scope.go:117] "RemoveContainer" containerID="f3704eb4ce5b135ab7bee85bad1dffc4bc9ae3c908e85c1bad050b5ae696d451" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.534780 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.541735 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.574497 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 14:01:58 crc kubenswrapper[4769]: E0122 14:01:58.575161 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84850145-89ac-4660-8a13-6abde9509589" containerName="glance-log" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.575182 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="84850145-89ac-4660-8a13-6abde9509589" containerName="glance-log" Jan 22 14:01:58 crc kubenswrapper[4769]: E0122 14:01:58.575194 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84850145-89ac-4660-8a13-6abde9509589" containerName="glance-httpd" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.575201 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="84850145-89ac-4660-8a13-6abde9509589" containerName="glance-httpd" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.575370 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="84850145-89ac-4660-8a13-6abde9509589" containerName="glance-log" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.575395 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="84850145-89ac-4660-8a13-6abde9509589" containerName="glance-httpd" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.576346 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.579616 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.579648 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.594724 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5c66f6f78c-g92qm"] Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.607702 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"dab0b9a4-13fb-42b5-be06-1231f96c4016\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.608111 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dab0b9a4-13fb-42b5-be06-1231f96c4016-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"dab0b9a4-13fb-42b5-be06-1231f96c4016\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.608270 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dab0b9a4-13fb-42b5-be06-1231f96c4016-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"dab0b9a4-13fb-42b5-be06-1231f96c4016\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.608375 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2ptr\" (UniqueName: \"kubernetes.io/projected/dab0b9a4-13fb-42b5-be06-1231f96c4016-kube-api-access-c2ptr\") pod \"glance-default-external-api-0\" (UID: \"dab0b9a4-13fb-42b5-be06-1231f96c4016\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.608744 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dab0b9a4-13fb-42b5-be06-1231f96c4016-config-data\") pod \"glance-default-external-api-0\" (UID: \"dab0b9a4-13fb-42b5-be06-1231f96c4016\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.609453 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dab0b9a4-13fb-42b5-be06-1231f96c4016-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"dab0b9a4-13fb-42b5-be06-1231f96c4016\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.609701 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dab0b9a4-13fb-42b5-be06-1231f96c4016-logs\") pod \"glance-default-external-api-0\" (UID: \"dab0b9a4-13fb-42b5-be06-1231f96c4016\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.610030 4769 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dab0b9a4-13fb-42b5-be06-1231f96c4016-scripts\") pod \"glance-default-external-api-0\" (UID: \"dab0b9a4-13fb-42b5-be06-1231f96c4016\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.651535 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5c66f6f78c-g92qm"] Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.663852 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.716580 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dab0b9a4-13fb-42b5-be06-1231f96c4016-scripts\") pod \"glance-default-external-api-0\" (UID: \"dab0b9a4-13fb-42b5-be06-1231f96c4016\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.716660 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"dab0b9a4-13fb-42b5-be06-1231f96c4016\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.717207 4769 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"dab0b9a4-13fb-42b5-be06-1231f96c4016\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-external-api-0" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.717859 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dab0b9a4-13fb-42b5-be06-1231f96c4016-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"dab0b9a4-13fb-42b5-be06-1231f96c4016\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.717909 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dab0b9a4-13fb-42b5-be06-1231f96c4016-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"dab0b9a4-13fb-42b5-be06-1231f96c4016\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.717929 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2ptr\" (UniqueName: \"kubernetes.io/projected/dab0b9a4-13fb-42b5-be06-1231f96c4016-kube-api-access-c2ptr\") pod \"glance-default-external-api-0\" (UID: \"dab0b9a4-13fb-42b5-be06-1231f96c4016\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.717963 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dab0b9a4-13fb-42b5-be06-1231f96c4016-config-data\") pod \"glance-default-external-api-0\" (UID: \"dab0b9a4-13fb-42b5-be06-1231f96c4016\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.717994 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/dab0b9a4-13fb-42b5-be06-1231f96c4016-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"dab0b9a4-13fb-42b5-be06-1231f96c4016\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.718040 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dab0b9a4-13fb-42b5-be06-1231f96c4016-logs\") pod \"glance-default-external-api-0\" (UID: \"dab0b9a4-13fb-42b5-be06-1231f96c4016\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.718457 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dab0b9a4-13fb-42b5-be06-1231f96c4016-logs\") pod \"glance-default-external-api-0\" (UID: \"dab0b9a4-13fb-42b5-be06-1231f96c4016\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.720612 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dab0b9a4-13fb-42b5-be06-1231f96c4016-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"dab0b9a4-13fb-42b5-be06-1231f96c4016\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.721116 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dab0b9a4-13fb-42b5-be06-1231f96c4016-scripts\") pod \"glance-default-external-api-0\" (UID: \"dab0b9a4-13fb-42b5-be06-1231f96c4016\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.721930 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dab0b9a4-13fb-42b5-be06-1231f96c4016-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"dab0b9a4-13fb-42b5-be06-1231f96c4016\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.724812 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dab0b9a4-13fb-42b5-be06-1231f96c4016-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"dab0b9a4-13fb-42b5-be06-1231f96c4016\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.725378 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dab0b9a4-13fb-42b5-be06-1231f96c4016-config-data\") pod \"glance-default-external-api-0\" (UID: \"dab0b9a4-13fb-42b5-be06-1231f96c4016\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.737700 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2ptr\" (UniqueName: \"kubernetes.io/projected/dab0b9a4-13fb-42b5-be06-1231f96c4016-kube-api-access-c2ptr\") pod \"glance-default-external-api-0\" (UID: \"dab0b9a4-13fb-42b5-be06-1231f96c4016\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.747071 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"dab0b9a4-13fb-42b5-be06-1231f96c4016\") " 
pod="openstack/glance-default-external-api-0" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.895315 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.895605 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84850145-89ac-4660-8a13-6abde9509589" path="/var/lib/kubelet/pods/84850145-89ac-4660-8a13-6abde9509589/volumes" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.896404 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f79e78c3-4c98-41e2-be1e-19d794ed1c17" path="/var/lib/kubelet/pods/f79e78c3-4c98-41e2-be1e-19d794ed1c17/volumes" Jan 22 14:01:59 crc kubenswrapper[4769]: I0122 14:01:59.497553 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7cc4c8d8bd-69kmb" event={"ID":"9a6a04bb-fa49-41f8-b75b-9c27873f8a1f","Type":"ContainerStarted","Data":"871961a2674139b5e212b19135fb06e41841ece36cd09ff61777241cbffbea44"} Jan 22 14:01:59 crc kubenswrapper[4769]: I0122 14:01:59.498102 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7cc4c8d8bd-69kmb" event={"ID":"9a6a04bb-fa49-41f8-b75b-9c27873f8a1f","Type":"ContainerStarted","Data":"02baacda2925f01747731a4c29d0431e83e88dd74a623594d756d0f640e90a3d"} Jan 22 14:01:59 crc kubenswrapper[4769]: I0122 14:01:59.499781 4769 generic.go:334] "Generic (PLEG): container finished" podID="09f60324-cca8-4988-bf9b-6967d2bfe9f6" containerID="5cdf9c7a0103441af1fab3d20ca2ba561f800dd384d01d55e05efe9b94bef65d" exitCode=0 Jan 22 14:01:59 crc kubenswrapper[4769]: I0122 14:01:59.500493 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-8bcps" event={"ID":"09f60324-cca8-4988-bf9b-6967d2bfe9f6","Type":"ContainerDied","Data":"5cdf9c7a0103441af1fab3d20ca2ba561f800dd384d01d55e05efe9b94bef65d"} Jan 22 14:01:59 crc kubenswrapper[4769]: I0122 14:01:59.503401 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"20251361-dc9f-403b-bffa-2a52a61e1bf4","Type":"ContainerStarted","Data":"c2fe4bab42f1a6335843c2232b24a2f046f8d9e40dc64570d58733dd3060aaca"} Jan 22 14:01:59 crc kubenswrapper[4769]: I0122 14:01:59.504828 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nv6tp" event={"ID":"4b938618-acdf-4f5f-8a04-daabc17cbb0c","Type":"ContainerStarted","Data":"4814c2687ce225a42dac55f4070477c0bf4c2e838fc60d85c396b3c0a24f2c9c"} Jan 22 14:01:59 crc kubenswrapper[4769]: I0122 14:01:59.504862 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nv6tp" event={"ID":"4b938618-acdf-4f5f-8a04-daabc17cbb0c","Type":"ContainerStarted","Data":"01b2d0c9f44658986f8b11850550b9d2274d498a3edf3bf06e168e5ce6662ef9"} Jan 22 14:01:59 crc kubenswrapper[4769]: I0122 14:01:59.511253 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-88b8d5fbf-mdp8d" event={"ID":"c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1","Type":"ContainerStarted","Data":"75092d5e878bea8006c178193d6c6e4dcc97bd9265416f68b45c587a530c6f17"} Jan 22 14:01:59 crc kubenswrapper[4769]: I0122 14:01:59.511301 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-88b8d5fbf-mdp8d" event={"ID":"c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1","Type":"ContainerStarted","Data":"24eeffe407e1855bc1e9fc29cbf3704d433191018da0d18584697247b2cdeb5c"} Jan 22 14:01:59 crc kubenswrapper[4769]: I0122 14:01:59.511423 4769 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-88b8d5fbf-mdp8d" podUID="c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1" containerName="horizon-log" containerID="cri-o://24eeffe407e1855bc1e9fc29cbf3704d433191018da0d18584697247b2cdeb5c" gracePeriod=30 Jan 22 14:01:59 crc kubenswrapper[4769]: I0122 14:01:59.511675 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-88b8d5fbf-mdp8d" podUID="c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1" containerName="horizon" containerID="cri-o://75092d5e878bea8006c178193d6c6e4dcc97bd9265416f68b45c587a530c6f17" gracePeriod=30 Jan 22 14:01:59 crc kubenswrapper[4769]: I0122 14:01:59.516952 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6464b9bcc6-tjgjv" event={"ID":"aa581bf8-802c-4c64-80fe-83a1baf50a6e","Type":"ContainerStarted","Data":"dc2e4c5fd0438679984690345cbc0e4820ff234a30678389437d5d203ba8a03a"} Jan 22 14:01:59 crc kubenswrapper[4769]: I0122 14:01:59.517007 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6464b9bcc6-tjgjv" event={"ID":"aa581bf8-802c-4c64-80fe-83a1baf50a6e","Type":"ContainerStarted","Data":"b1c17d223ae3c6e1952926e3cf792e852ecbb7c481e6bf6d9e1501d916e79b79"} Jan 22 14:01:59 crc kubenswrapper[4769]: I0122 14:01:59.520315 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7cc4c8d8bd-69kmb" podStartSLOduration=29.520294686 podStartE2EDuration="29.520294686s" podCreationTimestamp="2026-01-22 14:01:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:01:59.51789424 +0000 UTC m=+1098.929004179" watchObservedRunningTime="2026-01-22 14:01:59.520294686 +0000 UTC m=+1098.931404615" Jan 22 14:01:59 crc kubenswrapper[4769]: I0122 14:01:59.521745 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-bjdj8" event={"ID":"a0e92228-1a9b-49fc-9dfd-0493f70f5ee8","Type":"ContainerStarted","Data":"7f8570350656236f2df14cf1385749f2acad79acf56a71c03ae5fb37c7ed236c"} Jan 22 14:01:59 crc kubenswrapper[4769]: I0122 14:01:59.578971 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-nv6tp" podStartSLOduration=3.578945656 podStartE2EDuration="3.578945656s" podCreationTimestamp="2026-01-22 14:01:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:01:59.574338452 +0000 UTC m=+1098.985448391" watchObservedRunningTime="2026-01-22 14:01:59.578945656 +0000 UTC m=+1098.990055595" Jan 22 14:01:59 crc kubenswrapper[4769]: I0122 14:01:59.601760 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-88b8d5fbf-mdp8d" podStartSLOduration=38.000940544 podStartE2EDuration="40.601742374s" podCreationTimestamp="2026-01-22 14:01:19 +0000 UTC" firstStartedPulling="2026-01-22 14:01:55.429960427 +0000 UTC m=+1094.841070356" lastFinishedPulling="2026-01-22 14:01:58.030762247 +0000 UTC m=+1097.441872186" observedRunningTime="2026-01-22 14:01:59.592106383 +0000 UTC m=+1099.003216312" watchObservedRunningTime="2026-01-22 14:01:59.601742374 +0000 UTC m=+1099.012852303" Jan 22 14:01:59 crc kubenswrapper[4769]: I0122 14:01:59.614386 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6464b9bcc6-tjgjv" podStartSLOduration=29.614363886 
podStartE2EDuration="29.614363886s" podCreationTimestamp="2026-01-22 14:01:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:01:59.614097349 +0000 UTC m=+1099.025207278" watchObservedRunningTime="2026-01-22 14:01:59.614363886 +0000 UTC m=+1099.025473815" Jan 22 14:01:59 crc kubenswrapper[4769]: I0122 14:01:59.632102 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-bjdj8" podStartSLOduration=3.6421529550000002 podStartE2EDuration="42.632082947s" podCreationTimestamp="2026-01-22 14:01:17 +0000 UTC" firstStartedPulling="2026-01-22 14:01:19.048280728 +0000 UTC m=+1058.459390657" lastFinishedPulling="2026-01-22 14:01:58.03821072 +0000 UTC m=+1097.449320649" observedRunningTime="2026-01-22 14:01:59.627475892 +0000 UTC m=+1099.038585821" watchObservedRunningTime="2026-01-22 14:01:59.632082947 +0000 UTC m=+1099.043192876" Jan 22 14:01:59 crc kubenswrapper[4769]: I0122 14:01:59.680429 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 14:01:59 crc kubenswrapper[4769]: W0122 14:01:59.682542 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddab0b9a4_13fb_42b5_be06_1231f96c4016.slice/crio-d5ec275ecffbb843da730d80b73f7a952b5598fd63f1b7fb5564a3c77534d9ce WatchSource:0}: Error finding container d5ec275ecffbb843da730d80b73f7a952b5598fd63f1b7fb5564a3c77534d9ce: Status 404 returned error can't find the container with id d5ec275ecffbb843da730d80b73f7a952b5598fd63f1b7fb5564a3c77534d9ce Jan 22 14:02:00 crc kubenswrapper[4769]: I0122 14:02:00.118904 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-88b8d5fbf-mdp8d" Jan 22 14:02:00 crc kubenswrapper[4769]: I0122 14:02:00.460169 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6464b9bcc6-tjgjv" Jan 22 14:02:00 crc kubenswrapper[4769]: I0122 14:02:00.460925 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6464b9bcc6-tjgjv" Jan 22 14:02:00 crc kubenswrapper[4769]: I0122 14:02:00.530888 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dab0b9a4-13fb-42b5-be06-1231f96c4016","Type":"ContainerStarted","Data":"d5ec275ecffbb843da730d80b73f7a952b5598fd63f1b7fb5564a3c77534d9ce"} Jan 22 14:02:00 crc kubenswrapper[4769]: I0122 14:02:00.581661 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7cc4c8d8bd-69kmb" Jan 22 14:02:00 crc kubenswrapper[4769]: I0122 14:02:00.582017 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7cc4c8d8bd-69kmb" Jan 22 14:02:01 crc kubenswrapper[4769]: I0122 14:02:01.552778 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7464458e-c450-4b87-80d6-30abeb62e9d2","Type":"ContainerStarted","Data":"0caf44996649384d0bbc9bf8f4235fe301ea6cdb45a76523aeef46f47efee20a"} Jan 22 14:02:01 crc kubenswrapper[4769]: I0122 14:02:01.557339 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-8bcps" event={"ID":"09f60324-cca8-4988-bf9b-6967d2bfe9f6","Type":"ContainerStarted","Data":"83d081c8a21e75cf1863029740b353ffa7a1f8816c42743784431ac4247f119a"} Jan 22 14:02:01 crc kubenswrapper[4769]: I0122 
14:02:01.558257 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8b5c85b87-8bcps" Jan 22 14:02:01 crc kubenswrapper[4769]: I0122 14:02:01.571057 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"20251361-dc9f-403b-bffa-2a52a61e1bf4","Type":"ContainerStarted","Data":"e5f2cd1a50a52c4af55aebf79c50c4a2ae7b6a6c74ab0cbc059c7ac97bfd9f71"} Jan 22 14:02:01 crc kubenswrapper[4769]: I0122 14:02:01.571163 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="20251361-dc9f-403b-bffa-2a52a61e1bf4" containerName="glance-log" containerID="cri-o://c2fe4bab42f1a6335843c2232b24a2f046f8d9e40dc64570d58733dd3060aaca" gracePeriod=30 Jan 22 14:02:01 crc kubenswrapper[4769]: I0122 14:02:01.571206 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="20251361-dc9f-403b-bffa-2a52a61e1bf4" containerName="glance-httpd" containerID="cri-o://e5f2cd1a50a52c4af55aebf79c50c4a2ae7b6a6c74ab0cbc059c7ac97bfd9f71" gracePeriod=30 Jan 22 14:02:01 crc kubenswrapper[4769]: I0122 14:02:01.574747 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dab0b9a4-13fb-42b5-be06-1231f96c4016","Type":"ContainerStarted","Data":"df6c3aec1d93c8e3b135e0f0f09265bd6003dda7e97e74ba5f9864130b43bcee"} Jan 22 14:02:01 crc kubenswrapper[4769]: I0122 14:02:01.578422 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8b5c85b87-8bcps" podStartSLOduration=42.578403751 podStartE2EDuration="42.578403751s" podCreationTimestamp="2026-01-22 14:01:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:02:01.574266209 +0000 UTC m=+1100.985376138" watchObservedRunningTime="2026-01-22 14:02:01.578403751 +0000 UTC m=+1100.989513680" Jan 22 14:02:01 crc kubenswrapper[4769]: I0122 14:02:01.600889 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=42.600873411 podStartE2EDuration="42.600873411s" podCreationTimestamp="2026-01-22 14:01:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:02:01.596310047 +0000 UTC m=+1101.007419976" watchObservedRunningTime="2026-01-22 14:02:01.600873411 +0000 UTC m=+1101.011983340" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.147170 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.180430 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h46h6\" (UniqueName: \"kubernetes.io/projected/20251361-dc9f-403b-bffa-2a52a61e1bf4-kube-api-access-h46h6\") pod \"20251361-dc9f-403b-bffa-2a52a61e1bf4\" (UID: \"20251361-dc9f-403b-bffa-2a52a61e1bf4\") " Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.180547 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/20251361-dc9f-403b-bffa-2a52a61e1bf4-httpd-run\") pod \"20251361-dc9f-403b-bffa-2a52a61e1bf4\" (UID: \"20251361-dc9f-403b-bffa-2a52a61e1bf4\") " Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.180609 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20251361-dc9f-403b-bffa-2a52a61e1bf4-logs\") pod \"20251361-dc9f-403b-bffa-2a52a61e1bf4\" (UID: \"20251361-dc9f-403b-bffa-2a52a61e1bf4\") " Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.180699 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20251361-dc9f-403b-bffa-2a52a61e1bf4-scripts\") pod \"20251361-dc9f-403b-bffa-2a52a61e1bf4\" (UID: \"20251361-dc9f-403b-bffa-2a52a61e1bf4\") " Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.180797 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"20251361-dc9f-403b-bffa-2a52a61e1bf4\" (UID: \"20251361-dc9f-403b-bffa-2a52a61e1bf4\") " Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.180891 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20251361-dc9f-403b-bffa-2a52a61e1bf4-combined-ca-bundle\") pod \"20251361-dc9f-403b-bffa-2a52a61e1bf4\" (UID: \"20251361-dc9f-403b-bffa-2a52a61e1bf4\") " Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.180966 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20251361-dc9f-403b-bffa-2a52a61e1bf4-config-data\") pod \"20251361-dc9f-403b-bffa-2a52a61e1bf4\" (UID: \"20251361-dc9f-403b-bffa-2a52a61e1bf4\") " Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.181163 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20251361-dc9f-403b-bffa-2a52a61e1bf4-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "20251361-dc9f-403b-bffa-2a52a61e1bf4" (UID: "20251361-dc9f-403b-bffa-2a52a61e1bf4"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.181646 4769 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/20251361-dc9f-403b-bffa-2a52a61e1bf4-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.181944 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20251361-dc9f-403b-bffa-2a52a61e1bf4-logs" (OuterVolumeSpecName: "logs") pod "20251361-dc9f-403b-bffa-2a52a61e1bf4" (UID: "20251361-dc9f-403b-bffa-2a52a61e1bf4"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.189374 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20251361-dc9f-403b-bffa-2a52a61e1bf4-scripts" (OuterVolumeSpecName: "scripts") pod "20251361-dc9f-403b-bffa-2a52a61e1bf4" (UID: "20251361-dc9f-403b-bffa-2a52a61e1bf4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.189520 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20251361-dc9f-403b-bffa-2a52a61e1bf4-kube-api-access-h46h6" (OuterVolumeSpecName: "kube-api-access-h46h6") pod "20251361-dc9f-403b-bffa-2a52a61e1bf4" (UID: "20251361-dc9f-403b-bffa-2a52a61e1bf4"). InnerVolumeSpecName "kube-api-access-h46h6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.193361 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance") pod "20251361-dc9f-403b-bffa-2a52a61e1bf4" (UID: "20251361-dc9f-403b-bffa-2a52a61e1bf4"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.219096 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20251361-dc9f-403b-bffa-2a52a61e1bf4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "20251361-dc9f-403b-bffa-2a52a61e1bf4" (UID: "20251361-dc9f-403b-bffa-2a52a61e1bf4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.239091 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20251361-dc9f-403b-bffa-2a52a61e1bf4-config-data" (OuterVolumeSpecName: "config-data") pod "20251361-dc9f-403b-bffa-2a52a61e1bf4" (UID: "20251361-dc9f-403b-bffa-2a52a61e1bf4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.283744 4769 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20251361-dc9f-403b-bffa-2a52a61e1bf4-logs\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.283785 4769 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20251361-dc9f-403b-bffa-2a52a61e1bf4-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.283845 4769 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.283860 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20251361-dc9f-403b-bffa-2a52a61e1bf4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.283873 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20251361-dc9f-403b-bffa-2a52a61e1bf4-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.283884 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h46h6\" (UniqueName: \"kubernetes.io/projected/20251361-dc9f-403b-bffa-2a52a61e1bf4-kube-api-access-h46h6\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.304258 4769 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.385513 4769 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.583407 4769 generic.go:334] "Generic (PLEG): container finished" podID="20251361-dc9f-403b-bffa-2a52a61e1bf4" containerID="e5f2cd1a50a52c4af55aebf79c50c4a2ae7b6a6c74ab0cbc059c7ac97bfd9f71" exitCode=143 Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.583437 4769 generic.go:334] "Generic (PLEG): container finished" podID="20251361-dc9f-403b-bffa-2a52a61e1bf4" containerID="c2fe4bab42f1a6335843c2232b24a2f046f8d9e40dc64570d58733dd3060aaca" exitCode=143 Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.583465 4769 util.go:48] "No ready sandbox for pod can be found. 
Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.583465 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.583461 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"20251361-dc9f-403b-bffa-2a52a61e1bf4","Type":"ContainerDied","Data":"e5f2cd1a50a52c4af55aebf79c50c4a2ae7b6a6c74ab0cbc059c7ac97bfd9f71"}
Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.583583 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"20251361-dc9f-403b-bffa-2a52a61e1bf4","Type":"ContainerDied","Data":"c2fe4bab42f1a6335843c2232b24a2f046f8d9e40dc64570d58733dd3060aaca"}
Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.583600 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"20251361-dc9f-403b-bffa-2a52a61e1bf4","Type":"ContainerDied","Data":"64a2d7094305fe1e188755e9a76ea175f1aa7cbe4ae9900a3a9f08389e56e17f"}
Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.583617 4769 scope.go:117] "RemoveContainer" containerID="e5f2cd1a50a52c4af55aebf79c50c4a2ae7b6a6c74ab0cbc059c7ac97bfd9f71"
Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.601978 4769 scope.go:117] "RemoveContainer" containerID="c2fe4bab42f1a6335843c2232b24a2f046f8d9e40dc64570d58733dd3060aaca"
Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.625948 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.629829 4769 scope.go:117] "RemoveContainer" containerID="e5f2cd1a50a52c4af55aebf79c50c4a2ae7b6a6c74ab0cbc059c7ac97bfd9f71"
Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.631817 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 22 14:02:02 crc kubenswrapper[4769]: E0122 14:02:02.637120 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5f2cd1a50a52c4af55aebf79c50c4a2ae7b6a6c74ab0cbc059c7ac97bfd9f71\": container with ID starting with e5f2cd1a50a52c4af55aebf79c50c4a2ae7b6a6c74ab0cbc059c7ac97bfd9f71 not found: ID does not exist" containerID="e5f2cd1a50a52c4af55aebf79c50c4a2ae7b6a6c74ab0cbc059c7ac97bfd9f71"
Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.637179 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5f2cd1a50a52c4af55aebf79c50c4a2ae7b6a6c74ab0cbc059c7ac97bfd9f71"} err="failed to get container status \"e5f2cd1a50a52c4af55aebf79c50c4a2ae7b6a6c74ab0cbc059c7ac97bfd9f71\": rpc error: code = NotFound desc = could not find container \"e5f2cd1a50a52c4af55aebf79c50c4a2ae7b6a6c74ab0cbc059c7ac97bfd9f71\": container with ID starting with e5f2cd1a50a52c4af55aebf79c50c4a2ae7b6a6c74ab0cbc059c7ac97bfd9f71 not found: ID does not exist"
Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.637207 4769 scope.go:117] "RemoveContainer" containerID="c2fe4bab42f1a6335843c2232b24a2f046f8d9e40dc64570d58733dd3060aaca"
Jan 22 14:02:02 crc kubenswrapper[4769]: E0122 14:02:02.638014 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2fe4bab42f1a6335843c2232b24a2f046f8d9e40dc64570d58733dd3060aaca\": container with ID starting with c2fe4bab42f1a6335843c2232b24a2f046f8d9e40dc64570d58733dd3060aaca not found: ID does not exist"
containerID="c2fe4bab42f1a6335843c2232b24a2f046f8d9e40dc64570d58733dd3060aaca" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.638039 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2fe4bab42f1a6335843c2232b24a2f046f8d9e40dc64570d58733dd3060aaca"} err="failed to get container status \"c2fe4bab42f1a6335843c2232b24a2f046f8d9e40dc64570d58733dd3060aaca\": rpc error: code = NotFound desc = could not find container \"c2fe4bab42f1a6335843c2232b24a2f046f8d9e40dc64570d58733dd3060aaca\": container with ID starting with c2fe4bab42f1a6335843c2232b24a2f046f8d9e40dc64570d58733dd3060aaca not found: ID does not exist" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.638057 4769 scope.go:117] "RemoveContainer" containerID="e5f2cd1a50a52c4af55aebf79c50c4a2ae7b6a6c74ab0cbc059c7ac97bfd9f71" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.640468 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5f2cd1a50a52c4af55aebf79c50c4a2ae7b6a6c74ab0cbc059c7ac97bfd9f71"} err="failed to get container status \"e5f2cd1a50a52c4af55aebf79c50c4a2ae7b6a6c74ab0cbc059c7ac97bfd9f71\": rpc error: code = NotFound desc = could not find container \"e5f2cd1a50a52c4af55aebf79c50c4a2ae7b6a6c74ab0cbc059c7ac97bfd9f71\": container with ID starting with e5f2cd1a50a52c4af55aebf79c50c4a2ae7b6a6c74ab0cbc059c7ac97bfd9f71 not found: ID does not exist" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.640582 4769 scope.go:117] "RemoveContainer" containerID="c2fe4bab42f1a6335843c2232b24a2f046f8d9e40dc64570d58733dd3060aaca" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.641193 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2fe4bab42f1a6335843c2232b24a2f046f8d9e40dc64570d58733dd3060aaca"} err="failed to get container status \"c2fe4bab42f1a6335843c2232b24a2f046f8d9e40dc64570d58733dd3060aaca\": rpc error: code = NotFound desc = could not find container \"c2fe4bab42f1a6335843c2232b24a2f046f8d9e40dc64570d58733dd3060aaca\": container with ID starting with c2fe4bab42f1a6335843c2232b24a2f046f8d9e40dc64570d58733dd3060aaca not found: ID does not exist" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.648401 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 14:02:02 crc kubenswrapper[4769]: E0122 14:02:02.648903 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20251361-dc9f-403b-bffa-2a52a61e1bf4" containerName="glance-httpd" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.648916 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="20251361-dc9f-403b-bffa-2a52a61e1bf4" containerName="glance-httpd" Jan 22 14:02:02 crc kubenswrapper[4769]: E0122 14:02:02.648939 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20251361-dc9f-403b-bffa-2a52a61e1bf4" containerName="glance-log" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.648947 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="20251361-dc9f-403b-bffa-2a52a61e1bf4" containerName="glance-log" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.649121 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="20251361-dc9f-403b-bffa-2a52a61e1bf4" containerName="glance-log" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.649137 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="20251361-dc9f-403b-bffa-2a52a61e1bf4" containerName="glance-httpd" 
Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.650395 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.655357 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.655598 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.665310 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.794002 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49bcd071-b172-4180-996d-a8494ce80ab7-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"49bcd071-b172-4180-996d-a8494ce80ab7\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.795121 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"49bcd071-b172-4180-996d-a8494ce80ab7\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.795208 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/49bcd071-b172-4180-996d-a8494ce80ab7-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"49bcd071-b172-4180-996d-a8494ce80ab7\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.795255 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/49bcd071-b172-4180-996d-a8494ce80ab7-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"49bcd071-b172-4180-996d-a8494ce80ab7\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.795285 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49bcd071-b172-4180-996d-a8494ce80ab7-logs\") pod \"glance-default-internal-api-0\" (UID: \"49bcd071-b172-4180-996d-a8494ce80ab7\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.795309 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49bcd071-b172-4180-996d-a8494ce80ab7-scripts\") pod \"glance-default-internal-api-0\" (UID: \"49bcd071-b172-4180-996d-a8494ce80ab7\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.795346 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tk722\" (UniqueName: \"kubernetes.io/projected/49bcd071-b172-4180-996d-a8494ce80ab7-kube-api-access-tk722\") pod \"glance-default-internal-api-0\" (UID: \"49bcd071-b172-4180-996d-a8494ce80ab7\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:02 crc 
kubenswrapper[4769]: I0122 14:02:02.795556 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49bcd071-b172-4180-996d-a8494ce80ab7-config-data\") pod \"glance-default-internal-api-0\" (UID: \"49bcd071-b172-4180-996d-a8494ce80ab7\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.896900 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49bcd071-b172-4180-996d-a8494ce80ab7-config-data\") pod \"glance-default-internal-api-0\" (UID: \"49bcd071-b172-4180-996d-a8494ce80ab7\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.896994 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49bcd071-b172-4180-996d-a8494ce80ab7-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"49bcd071-b172-4180-996d-a8494ce80ab7\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.897022 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"49bcd071-b172-4180-996d-a8494ce80ab7\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.897054 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/49bcd071-b172-4180-996d-a8494ce80ab7-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"49bcd071-b172-4180-996d-a8494ce80ab7\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.897090 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/49bcd071-b172-4180-996d-a8494ce80ab7-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"49bcd071-b172-4180-996d-a8494ce80ab7\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.897114 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20251361-dc9f-403b-bffa-2a52a61e1bf4" path="/var/lib/kubelet/pods/20251361-dc9f-403b-bffa-2a52a61e1bf4/volumes" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.897179 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49bcd071-b172-4180-996d-a8494ce80ab7-logs\") pod \"glance-default-internal-api-0\" (UID: \"49bcd071-b172-4180-996d-a8494ce80ab7\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.897206 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49bcd071-b172-4180-996d-a8494ce80ab7-scripts\") pod \"glance-default-internal-api-0\" (UID: \"49bcd071-b172-4180-996d-a8494ce80ab7\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.897368 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tk722\" (UniqueName: 
\"kubernetes.io/projected/49bcd071-b172-4180-996d-a8494ce80ab7-kube-api-access-tk722\") pod \"glance-default-internal-api-0\" (UID: \"49bcd071-b172-4180-996d-a8494ce80ab7\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.897283 4769 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"49bcd071-b172-4180-996d-a8494ce80ab7\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-internal-api-0" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.897748 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/49bcd071-b172-4180-996d-a8494ce80ab7-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"49bcd071-b172-4180-996d-a8494ce80ab7\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.897884 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49bcd071-b172-4180-996d-a8494ce80ab7-logs\") pod \"glance-default-internal-api-0\" (UID: \"49bcd071-b172-4180-996d-a8494ce80ab7\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.906017 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49bcd071-b172-4180-996d-a8494ce80ab7-scripts\") pod \"glance-default-internal-api-0\" (UID: \"49bcd071-b172-4180-996d-a8494ce80ab7\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.909105 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49bcd071-b172-4180-996d-a8494ce80ab7-config-data\") pod \"glance-default-internal-api-0\" (UID: \"49bcd071-b172-4180-996d-a8494ce80ab7\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.909165 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49bcd071-b172-4180-996d-a8494ce80ab7-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"49bcd071-b172-4180-996d-a8494ce80ab7\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.918603 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/49bcd071-b172-4180-996d-a8494ce80ab7-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"49bcd071-b172-4180-996d-a8494ce80ab7\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.932847 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tk722\" (UniqueName: \"kubernetes.io/projected/49bcd071-b172-4180-996d-a8494ce80ab7-kube-api-access-tk722\") pod \"glance-default-internal-api-0\" (UID: \"49bcd071-b172-4180-996d-a8494ce80ab7\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.952916 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: 
\"49bcd071-b172-4180-996d-a8494ce80ab7\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:03 crc kubenswrapper[4769]: I0122 14:02:03.028979 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 22 14:02:03 crc kubenswrapper[4769]: I0122 14:02:03.598281 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dab0b9a4-13fb-42b5-be06-1231f96c4016","Type":"ContainerStarted","Data":"42b650e1bb6392891cc6da4a8a010ef12200563d87973891cd250c5a4e408d2f"} Jan 22 14:02:03 crc kubenswrapper[4769]: I0122 14:02:03.633530 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 14:02:03 crc kubenswrapper[4769]: W0122 14:02:03.654291 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod49bcd071_b172_4180_996d_a8494ce80ab7.slice/crio-c4bd6d4a50528753ee39f385b25433a38f084b70a487761e402319d168c73922 WatchSource:0}: Error finding container c4bd6d4a50528753ee39f385b25433a38f084b70a487761e402319d168c73922: Status 404 returned error can't find the container with id c4bd6d4a50528753ee39f385b25433a38f084b70a487761e402319d168c73922 Jan 22 14:02:04 crc kubenswrapper[4769]: I0122 14:02:04.625718 4769 generic.go:334] "Generic (PLEG): container finished" podID="a0e92228-1a9b-49fc-9dfd-0493f70f5ee8" containerID="7f8570350656236f2df14cf1385749f2acad79acf56a71c03ae5fb37c7ed236c" exitCode=0 Jan 22 14:02:04 crc kubenswrapper[4769]: I0122 14:02:04.626282 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-bjdj8" event={"ID":"a0e92228-1a9b-49fc-9dfd-0493f70f5ee8","Type":"ContainerDied","Data":"7f8570350656236f2df14cf1385749f2acad79acf56a71c03ae5fb37c7ed236c"} Jan 22 14:02:04 crc kubenswrapper[4769]: I0122 14:02:04.637948 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"49bcd071-b172-4180-996d-a8494ce80ab7","Type":"ContainerStarted","Data":"938d482072f52ec70bd25d780639f9001b17b5d4e8cfed165c79e03594adbc40"} Jan 22 14:02:04 crc kubenswrapper[4769]: I0122 14:02:04.638012 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"49bcd071-b172-4180-996d-a8494ce80ab7","Type":"ContainerStarted","Data":"c4bd6d4a50528753ee39f385b25433a38f084b70a487761e402319d168c73922"} Jan 22 14:02:04 crc kubenswrapper[4769]: I0122 14:02:04.643283 4769 generic.go:334] "Generic (PLEG): container finished" podID="4b938618-acdf-4f5f-8a04-daabc17cbb0c" containerID="4814c2687ce225a42dac55f4070477c0bf4c2e838fc60d85c396b3c0a24f2c9c" exitCode=0 Jan 22 14:02:04 crc kubenswrapper[4769]: I0122 14:02:04.644417 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nv6tp" event={"ID":"4b938618-acdf-4f5f-8a04-daabc17cbb0c","Type":"ContainerDied","Data":"4814c2687ce225a42dac55f4070477c0bf4c2e838fc60d85c396b3c0a24f2c9c"} Jan 22 14:02:04 crc kubenswrapper[4769]: I0122 14:02:04.660984 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.660961935 podStartE2EDuration="6.660961935s" podCreationTimestamp="2026-01-22 14:01:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:02:03.63442585 +0000 UTC m=+1103.045535779" 
watchObservedRunningTime="2026-01-22 14:02:04.660961935 +0000 UTC m=+1104.072071864" Jan 22 14:02:05 crc kubenswrapper[4769]: I0122 14:02:05.656957 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"49bcd071-b172-4180-996d-a8494ce80ab7","Type":"ContainerStarted","Data":"a2d9a00afd560361b63a4a984016f967c6c70fe342eda3b82ceb9f885d271c07"} Jan 22 14:02:05 crc kubenswrapper[4769]: I0122 14:02:05.689475 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.689458383 podStartE2EDuration="3.689458383s" podCreationTimestamp="2026-01-22 14:02:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:02:05.681778075 +0000 UTC m=+1105.092888024" watchObservedRunningTime="2026-01-22 14:02:05.689458383 +0000 UTC m=+1105.100568312" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.163291 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-nv6tp" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.168660 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-bjdj8" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.268349 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4b938618-acdf-4f5f-8a04-daabc17cbb0c-credential-keys\") pod \"4b938618-acdf-4f5f-8a04-daabc17cbb0c\" (UID: \"4b938618-acdf-4f5f-8a04-daabc17cbb0c\") " Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.268468 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0e92228-1a9b-49fc-9dfd-0493f70f5ee8-scripts\") pod \"a0e92228-1a9b-49fc-9dfd-0493f70f5ee8\" (UID: \"a0e92228-1a9b-49fc-9dfd-0493f70f5ee8\") " Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.268496 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0e92228-1a9b-49fc-9dfd-0493f70f5ee8-config-data\") pod \"a0e92228-1a9b-49fc-9dfd-0493f70f5ee8\" (UID: \"a0e92228-1a9b-49fc-9dfd-0493f70f5ee8\") " Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.268518 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0e92228-1a9b-49fc-9dfd-0493f70f5ee8-combined-ca-bundle\") pod \"a0e92228-1a9b-49fc-9dfd-0493f70f5ee8\" (UID: \"a0e92228-1a9b-49fc-9dfd-0493f70f5ee8\") " Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.268572 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a0e92228-1a9b-49fc-9dfd-0493f70f5ee8-logs\") pod \"a0e92228-1a9b-49fc-9dfd-0493f70f5ee8\" (UID: \"a0e92228-1a9b-49fc-9dfd-0493f70f5ee8\") " Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.268601 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4b938618-acdf-4f5f-8a04-daabc17cbb0c-fernet-keys\") pod \"4b938618-acdf-4f5f-8a04-daabc17cbb0c\" (UID: \"4b938618-acdf-4f5f-8a04-daabc17cbb0c\") " Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.268627 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-g78xp\" (UniqueName: \"kubernetes.io/projected/a0e92228-1a9b-49fc-9dfd-0493f70f5ee8-kube-api-access-g78xp\") pod \"a0e92228-1a9b-49fc-9dfd-0493f70f5ee8\" (UID: \"a0e92228-1a9b-49fc-9dfd-0493f70f5ee8\") " Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.268651 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b938618-acdf-4f5f-8a04-daabc17cbb0c-combined-ca-bundle\") pod \"4b938618-acdf-4f5f-8a04-daabc17cbb0c\" (UID: \"4b938618-acdf-4f5f-8a04-daabc17cbb0c\") " Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.268688 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b938618-acdf-4f5f-8a04-daabc17cbb0c-config-data\") pod \"4b938618-acdf-4f5f-8a04-daabc17cbb0c\" (UID: \"4b938618-acdf-4f5f-8a04-daabc17cbb0c\") " Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.268740 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dsgsj\" (UniqueName: \"kubernetes.io/projected/4b938618-acdf-4f5f-8a04-daabc17cbb0c-kube-api-access-dsgsj\") pod \"4b938618-acdf-4f5f-8a04-daabc17cbb0c\" (UID: \"4b938618-acdf-4f5f-8a04-daabc17cbb0c\") " Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.268767 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b938618-acdf-4f5f-8a04-daabc17cbb0c-scripts\") pod \"4b938618-acdf-4f5f-8a04-daabc17cbb0c\" (UID: \"4b938618-acdf-4f5f-8a04-daabc17cbb0c\") " Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.269748 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0e92228-1a9b-49fc-9dfd-0493f70f5ee8-logs" (OuterVolumeSpecName: "logs") pod "a0e92228-1a9b-49fc-9dfd-0493f70f5ee8" (UID: "a0e92228-1a9b-49fc-9dfd-0493f70f5ee8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.275027 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b938618-acdf-4f5f-8a04-daabc17cbb0c-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "4b938618-acdf-4f5f-8a04-daabc17cbb0c" (UID: "4b938618-acdf-4f5f-8a04-daabc17cbb0c"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.278435 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b938618-acdf-4f5f-8a04-daabc17cbb0c-kube-api-access-dsgsj" (OuterVolumeSpecName: "kube-api-access-dsgsj") pod "4b938618-acdf-4f5f-8a04-daabc17cbb0c" (UID: "4b938618-acdf-4f5f-8a04-daabc17cbb0c"). InnerVolumeSpecName "kube-api-access-dsgsj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.279264 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b938618-acdf-4f5f-8a04-daabc17cbb0c-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "4b938618-acdf-4f5f-8a04-daabc17cbb0c" (UID: "4b938618-acdf-4f5f-8a04-daabc17cbb0c"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.279915 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0e92228-1a9b-49fc-9dfd-0493f70f5ee8-kube-api-access-g78xp" (OuterVolumeSpecName: "kube-api-access-g78xp") pod "a0e92228-1a9b-49fc-9dfd-0493f70f5ee8" (UID: "a0e92228-1a9b-49fc-9dfd-0493f70f5ee8"). InnerVolumeSpecName "kube-api-access-g78xp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.279985 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b938618-acdf-4f5f-8a04-daabc17cbb0c-scripts" (OuterVolumeSpecName: "scripts") pod "4b938618-acdf-4f5f-8a04-daabc17cbb0c" (UID: "4b938618-acdf-4f5f-8a04-daabc17cbb0c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.281207 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0e92228-1a9b-49fc-9dfd-0493f70f5ee8-scripts" (OuterVolumeSpecName: "scripts") pod "a0e92228-1a9b-49fc-9dfd-0493f70f5ee8" (UID: "a0e92228-1a9b-49fc-9dfd-0493f70f5ee8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.300879 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0e92228-1a9b-49fc-9dfd-0493f70f5ee8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a0e92228-1a9b-49fc-9dfd-0493f70f5ee8" (UID: "a0e92228-1a9b-49fc-9dfd-0493f70f5ee8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.323349 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0e92228-1a9b-49fc-9dfd-0493f70f5ee8-config-data" (OuterVolumeSpecName: "config-data") pod "a0e92228-1a9b-49fc-9dfd-0493f70f5ee8" (UID: "a0e92228-1a9b-49fc-9dfd-0493f70f5ee8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.337083 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b938618-acdf-4f5f-8a04-daabc17cbb0c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4b938618-acdf-4f5f-8a04-daabc17cbb0c" (UID: "4b938618-acdf-4f5f-8a04-daabc17cbb0c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.338891 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b938618-acdf-4f5f-8a04-daabc17cbb0c-config-data" (OuterVolumeSpecName: "config-data") pod "4b938618-acdf-4f5f-8a04-daabc17cbb0c" (UID: "4b938618-acdf-4f5f-8a04-daabc17cbb0c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.371013 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0e92228-1a9b-49fc-9dfd-0493f70f5ee8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.371054 4769 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a0e92228-1a9b-49fc-9dfd-0493f70f5ee8-logs\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.371066 4769 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4b938618-acdf-4f5f-8a04-daabc17cbb0c-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.371075 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g78xp\" (UniqueName: \"kubernetes.io/projected/a0e92228-1a9b-49fc-9dfd-0493f70f5ee8-kube-api-access-g78xp\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.371087 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b938618-acdf-4f5f-8a04-daabc17cbb0c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.371097 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b938618-acdf-4f5f-8a04-daabc17cbb0c-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.371107 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dsgsj\" (UniqueName: \"kubernetes.io/projected/4b938618-acdf-4f5f-8a04-daabc17cbb0c-kube-api-access-dsgsj\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.371115 4769 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b938618-acdf-4f5f-8a04-daabc17cbb0c-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.371122 4769 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4b938618-acdf-4f5f-8a04-daabc17cbb0c-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.371130 4769 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0e92228-1a9b-49fc-9dfd-0493f70f5ee8-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.371137 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0e92228-1a9b-49fc-9dfd-0493f70f5ee8-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.673633 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-bjdj8" event={"ID":"a0e92228-1a9b-49fc-9dfd-0493f70f5ee8","Type":"ContainerDied","Data":"db6d489e657294f84dd39f03818355418206b6b45168e98d6d149865405021b3"} Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.673957 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db6d489e657294f84dd39f03818355418206b6b45168e98d6d149865405021b3" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.673656 4769 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-bjdj8" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.683069 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-nv6tp" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.684281 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nv6tp" event={"ID":"4b938618-acdf-4f5f-8a04-daabc17cbb0c","Type":"ContainerDied","Data":"01b2d0c9f44658986f8b11850550b9d2274d498a3edf3bf06e168e5ce6662ef9"} Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.684332 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01b2d0c9f44658986f8b11850550b9d2274d498a3edf3bf06e168e5ce6662ef9" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.759606 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6b8cb8655d-vl7kp"] Jan 22 14:02:06 crc kubenswrapper[4769]: E0122 14:02:06.760191 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b938618-acdf-4f5f-8a04-daabc17cbb0c" containerName="keystone-bootstrap" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.760211 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b938618-acdf-4f5f-8a04-daabc17cbb0c" containerName="keystone-bootstrap" Jan 22 14:02:06 crc kubenswrapper[4769]: E0122 14:02:06.760245 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0e92228-1a9b-49fc-9dfd-0493f70f5ee8" containerName="placement-db-sync" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.760253 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0e92228-1a9b-49fc-9dfd-0493f70f5ee8" containerName="placement-db-sync" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.760480 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0e92228-1a9b-49fc-9dfd-0493f70f5ee8" containerName="placement-db-sync" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.760501 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b938618-acdf-4f5f-8a04-daabc17cbb0c" containerName="keystone-bootstrap" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.761810 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6b8cb8655d-vl7kp" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.771366 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.771490 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-dx89d" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.771749 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.772133 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.781657 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.789924 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6b8cb8655d-vl7kp"] Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.842916 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-d8d684bc6-pmxwh"] Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.845622 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-d8d684bc6-pmxwh" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.851526 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-nrw5d" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.851705 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.851876 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.852098 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.852430 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.852568 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.857408 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-d8d684bc6-pmxwh"] Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.880820 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d4588b0-8c00-47bf-8b6d-cab4a5d792ab-public-tls-certs\") pod \"placement-6b8cb8655d-vl7kp\" (UID: \"8d4588b0-8c00-47bf-8b6d-cab4a5d792ab\") " pod="openstack/placement-6b8cb8655d-vl7kp" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.880884 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d4588b0-8c00-47bf-8b6d-cab4a5d792ab-logs\") pod \"placement-6b8cb8655d-vl7kp\" (UID: \"8d4588b0-8c00-47bf-8b6d-cab4a5d792ab\") " pod="openstack/placement-6b8cb8655d-vl7kp" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.880932 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/8d4588b0-8c00-47bf-8b6d-cab4a5d792ab-combined-ca-bundle\") pod \"placement-6b8cb8655d-vl7kp\" (UID: \"8d4588b0-8c00-47bf-8b6d-cab4a5d792ab\") " pod="openstack/placement-6b8cb8655d-vl7kp" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.880953 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d4588b0-8c00-47bf-8b6d-cab4a5d792ab-config-data\") pod \"placement-6b8cb8655d-vl7kp\" (UID: \"8d4588b0-8c00-47bf-8b6d-cab4a5d792ab\") " pod="openstack/placement-6b8cb8655d-vl7kp" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.880975 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d4588b0-8c00-47bf-8b6d-cab4a5d792ab-scripts\") pod \"placement-6b8cb8655d-vl7kp\" (UID: \"8d4588b0-8c00-47bf-8b6d-cab4a5d792ab\") " pod="openstack/placement-6b8cb8655d-vl7kp" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.881004 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zzd9\" (UniqueName: \"kubernetes.io/projected/8d4588b0-8c00-47bf-8b6d-cab4a5d792ab-kube-api-access-9zzd9\") pod \"placement-6b8cb8655d-vl7kp\" (UID: \"8d4588b0-8c00-47bf-8b6d-cab4a5d792ab\") " pod="openstack/placement-6b8cb8655d-vl7kp" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.881025 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d4588b0-8c00-47bf-8b6d-cab4a5d792ab-internal-tls-certs\") pod \"placement-6b8cb8655d-vl7kp\" (UID: \"8d4588b0-8c00-47bf-8b6d-cab4a5d792ab\") " pod="openstack/placement-6b8cb8655d-vl7kp" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.985508 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ddb12191-d02d-4e79-82cd-d164ecaf2093-public-tls-certs\") pod \"keystone-d8d684bc6-pmxwh\" (UID: \"ddb12191-d02d-4e79-82cd-d164ecaf2093\") " pod="openstack/keystone-d8d684bc6-pmxwh" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.985603 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d4588b0-8c00-47bf-8b6d-cab4a5d792ab-public-tls-certs\") pod \"placement-6b8cb8655d-vl7kp\" (UID: \"8d4588b0-8c00-47bf-8b6d-cab4a5d792ab\") " pod="openstack/placement-6b8cb8655d-vl7kp" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.985680 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d4588b0-8c00-47bf-8b6d-cab4a5d792ab-logs\") pod \"placement-6b8cb8655d-vl7kp\" (UID: \"8d4588b0-8c00-47bf-8b6d-cab4a5d792ab\") " pod="openstack/placement-6b8cb8655d-vl7kp" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.985706 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lj5bs\" (UniqueName: \"kubernetes.io/projected/ddb12191-d02d-4e79-82cd-d164ecaf2093-kube-api-access-lj5bs\") pod \"keystone-d8d684bc6-pmxwh\" (UID: \"ddb12191-d02d-4e79-82cd-d164ecaf2093\") " pod="openstack/keystone-d8d684bc6-pmxwh" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.985734 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddb12191-d02d-4e79-82cd-d164ecaf2093-config-data\") pod \"keystone-d8d684bc6-pmxwh\" (UID: \"ddb12191-d02d-4e79-82cd-d164ecaf2093\") " pod="openstack/keystone-d8d684bc6-pmxwh" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.986428 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddb12191-d02d-4e79-82cd-d164ecaf2093-combined-ca-bundle\") pod \"keystone-d8d684bc6-pmxwh\" (UID: \"ddb12191-d02d-4e79-82cd-d164ecaf2093\") " pod="openstack/keystone-d8d684bc6-pmxwh" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.986466 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d4588b0-8c00-47bf-8b6d-cab4a5d792ab-config-data\") pod \"placement-6b8cb8655d-vl7kp\" (UID: \"8d4588b0-8c00-47bf-8b6d-cab4a5d792ab\") " pod="openstack/placement-6b8cb8655d-vl7kp" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.986485 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d4588b0-8c00-47bf-8b6d-cab4a5d792ab-combined-ca-bundle\") pod \"placement-6b8cb8655d-vl7kp\" (UID: \"8d4588b0-8c00-47bf-8b6d-cab4a5d792ab\") " pod="openstack/placement-6b8cb8655d-vl7kp" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.986679 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ddb12191-d02d-4e79-82cd-d164ecaf2093-credential-keys\") pod \"keystone-d8d684bc6-pmxwh\" (UID: \"ddb12191-d02d-4e79-82cd-d164ecaf2093\") " pod="openstack/keystone-d8d684bc6-pmxwh" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.986782 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d4588b0-8c00-47bf-8b6d-cab4a5d792ab-scripts\") pod \"placement-6b8cb8655d-vl7kp\" (UID: \"8d4588b0-8c00-47bf-8b6d-cab4a5d792ab\") " pod="openstack/placement-6b8cb8655d-vl7kp" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.986875 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ddb12191-d02d-4e79-82cd-d164ecaf2093-internal-tls-certs\") pod \"keystone-d8d684bc6-pmxwh\" (UID: \"ddb12191-d02d-4e79-82cd-d164ecaf2093\") " pod="openstack/keystone-d8d684bc6-pmxwh" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.986912 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zzd9\" (UniqueName: \"kubernetes.io/projected/8d4588b0-8c00-47bf-8b6d-cab4a5d792ab-kube-api-access-9zzd9\") pod \"placement-6b8cb8655d-vl7kp\" (UID: \"8d4588b0-8c00-47bf-8b6d-cab4a5d792ab\") " pod="openstack/placement-6b8cb8655d-vl7kp" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.986978 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d4588b0-8c00-47bf-8b6d-cab4a5d792ab-internal-tls-certs\") pod \"placement-6b8cb8655d-vl7kp\" (UID: \"8d4588b0-8c00-47bf-8b6d-cab4a5d792ab\") " pod="openstack/placement-6b8cb8655d-vl7kp" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.987034 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/ddb12191-d02d-4e79-82cd-d164ecaf2093-scripts\") pod \"keystone-d8d684bc6-pmxwh\" (UID: \"ddb12191-d02d-4e79-82cd-d164ecaf2093\") " pod="openstack/keystone-d8d684bc6-pmxwh" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.987132 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ddb12191-d02d-4e79-82cd-d164ecaf2093-fernet-keys\") pod \"keystone-d8d684bc6-pmxwh\" (UID: \"ddb12191-d02d-4e79-82cd-d164ecaf2093\") " pod="openstack/keystone-d8d684bc6-pmxwh" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.990170 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d4588b0-8c00-47bf-8b6d-cab4a5d792ab-logs\") pod \"placement-6b8cb8655d-vl7kp\" (UID: \"8d4588b0-8c00-47bf-8b6d-cab4a5d792ab\") " pod="openstack/placement-6b8cb8655d-vl7kp" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.992531 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d4588b0-8c00-47bf-8b6d-cab4a5d792ab-scripts\") pod \"placement-6b8cb8655d-vl7kp\" (UID: \"8d4588b0-8c00-47bf-8b6d-cab4a5d792ab\") " pod="openstack/placement-6b8cb8655d-vl7kp" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.992555 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d4588b0-8c00-47bf-8b6d-cab4a5d792ab-public-tls-certs\") pod \"placement-6b8cb8655d-vl7kp\" (UID: \"8d4588b0-8c00-47bf-8b6d-cab4a5d792ab\") " pod="openstack/placement-6b8cb8655d-vl7kp" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.995105 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d4588b0-8c00-47bf-8b6d-cab4a5d792ab-config-data\") pod \"placement-6b8cb8655d-vl7kp\" (UID: \"8d4588b0-8c00-47bf-8b6d-cab4a5d792ab\") " pod="openstack/placement-6b8cb8655d-vl7kp" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.995680 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d4588b0-8c00-47bf-8b6d-cab4a5d792ab-internal-tls-certs\") pod \"placement-6b8cb8655d-vl7kp\" (UID: \"8d4588b0-8c00-47bf-8b6d-cab4a5d792ab\") " pod="openstack/placement-6b8cb8655d-vl7kp" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.996547 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d4588b0-8c00-47bf-8b6d-cab4a5d792ab-combined-ca-bundle\") pod \"placement-6b8cb8655d-vl7kp\" (UID: \"8d4588b0-8c00-47bf-8b6d-cab4a5d792ab\") " pod="openstack/placement-6b8cb8655d-vl7kp" Jan 22 14:02:07 crc kubenswrapper[4769]: I0122 14:02:07.005625 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zzd9\" (UniqueName: \"kubernetes.io/projected/8d4588b0-8c00-47bf-8b6d-cab4a5d792ab-kube-api-access-9zzd9\") pod \"placement-6b8cb8655d-vl7kp\" (UID: \"8d4588b0-8c00-47bf-8b6d-cab4a5d792ab\") " pod="openstack/placement-6b8cb8655d-vl7kp" Jan 22 14:02:07 crc kubenswrapper[4769]: I0122 14:02:07.088997 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ddb12191-d02d-4e79-82cd-d164ecaf2093-fernet-keys\") pod \"keystone-d8d684bc6-pmxwh\" (UID: \"ddb12191-d02d-4e79-82cd-d164ecaf2093\") " 
pod="openstack/keystone-d8d684bc6-pmxwh" Jan 22 14:02:07 crc kubenswrapper[4769]: I0122 14:02:07.089098 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ddb12191-d02d-4e79-82cd-d164ecaf2093-public-tls-certs\") pod \"keystone-d8d684bc6-pmxwh\" (UID: \"ddb12191-d02d-4e79-82cd-d164ecaf2093\") " pod="openstack/keystone-d8d684bc6-pmxwh" Jan 22 14:02:07 crc kubenswrapper[4769]: I0122 14:02:07.089168 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lj5bs\" (UniqueName: \"kubernetes.io/projected/ddb12191-d02d-4e79-82cd-d164ecaf2093-kube-api-access-lj5bs\") pod \"keystone-d8d684bc6-pmxwh\" (UID: \"ddb12191-d02d-4e79-82cd-d164ecaf2093\") " pod="openstack/keystone-d8d684bc6-pmxwh" Jan 22 14:02:07 crc kubenswrapper[4769]: I0122 14:02:07.089194 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddb12191-d02d-4e79-82cd-d164ecaf2093-config-data\") pod \"keystone-d8d684bc6-pmxwh\" (UID: \"ddb12191-d02d-4e79-82cd-d164ecaf2093\") " pod="openstack/keystone-d8d684bc6-pmxwh" Jan 22 14:02:07 crc kubenswrapper[4769]: I0122 14:02:07.089245 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddb12191-d02d-4e79-82cd-d164ecaf2093-combined-ca-bundle\") pod \"keystone-d8d684bc6-pmxwh\" (UID: \"ddb12191-d02d-4e79-82cd-d164ecaf2093\") " pod="openstack/keystone-d8d684bc6-pmxwh" Jan 22 14:02:07 crc kubenswrapper[4769]: I0122 14:02:07.089278 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ddb12191-d02d-4e79-82cd-d164ecaf2093-credential-keys\") pod \"keystone-d8d684bc6-pmxwh\" (UID: \"ddb12191-d02d-4e79-82cd-d164ecaf2093\") " pod="openstack/keystone-d8d684bc6-pmxwh" Jan 22 14:02:07 crc kubenswrapper[4769]: I0122 14:02:07.089316 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ddb12191-d02d-4e79-82cd-d164ecaf2093-internal-tls-certs\") pod \"keystone-d8d684bc6-pmxwh\" (UID: \"ddb12191-d02d-4e79-82cd-d164ecaf2093\") " pod="openstack/keystone-d8d684bc6-pmxwh" Jan 22 14:02:07 crc kubenswrapper[4769]: I0122 14:02:07.089366 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ddb12191-d02d-4e79-82cd-d164ecaf2093-scripts\") pod \"keystone-d8d684bc6-pmxwh\" (UID: \"ddb12191-d02d-4e79-82cd-d164ecaf2093\") " pod="openstack/keystone-d8d684bc6-pmxwh" Jan 22 14:02:07 crc kubenswrapper[4769]: I0122 14:02:07.092972 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ddb12191-d02d-4e79-82cd-d164ecaf2093-fernet-keys\") pod \"keystone-d8d684bc6-pmxwh\" (UID: \"ddb12191-d02d-4e79-82cd-d164ecaf2093\") " pod="openstack/keystone-d8d684bc6-pmxwh" Jan 22 14:02:07 crc kubenswrapper[4769]: I0122 14:02:07.093641 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ddb12191-d02d-4e79-82cd-d164ecaf2093-scripts\") pod \"keystone-d8d684bc6-pmxwh\" (UID: \"ddb12191-d02d-4e79-82cd-d164ecaf2093\") " pod="openstack/keystone-d8d684bc6-pmxwh" Jan 22 14:02:07 crc kubenswrapper[4769]: I0122 14:02:07.096776 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ddb12191-d02d-4e79-82cd-d164ecaf2093-internal-tls-certs\") pod \"keystone-d8d684bc6-pmxwh\" (UID: \"ddb12191-d02d-4e79-82cd-d164ecaf2093\") " pod="openstack/keystone-d8d684bc6-pmxwh" Jan 22 14:02:07 crc kubenswrapper[4769]: I0122 14:02:07.097756 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ddb12191-d02d-4e79-82cd-d164ecaf2093-public-tls-certs\") pod \"keystone-d8d684bc6-pmxwh\" (UID: \"ddb12191-d02d-4e79-82cd-d164ecaf2093\") " pod="openstack/keystone-d8d684bc6-pmxwh" Jan 22 14:02:07 crc kubenswrapper[4769]: I0122 14:02:07.098691 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddb12191-d02d-4e79-82cd-d164ecaf2093-combined-ca-bundle\") pod \"keystone-d8d684bc6-pmxwh\" (UID: \"ddb12191-d02d-4e79-82cd-d164ecaf2093\") " pod="openstack/keystone-d8d684bc6-pmxwh" Jan 22 14:02:07 crc kubenswrapper[4769]: I0122 14:02:07.099073 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ddb12191-d02d-4e79-82cd-d164ecaf2093-credential-keys\") pod \"keystone-d8d684bc6-pmxwh\" (UID: \"ddb12191-d02d-4e79-82cd-d164ecaf2093\") " pod="openstack/keystone-d8d684bc6-pmxwh" Jan 22 14:02:07 crc kubenswrapper[4769]: I0122 14:02:07.100667 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddb12191-d02d-4e79-82cd-d164ecaf2093-config-data\") pod \"keystone-d8d684bc6-pmxwh\" (UID: \"ddb12191-d02d-4e79-82cd-d164ecaf2093\") " pod="openstack/keystone-d8d684bc6-pmxwh" Jan 22 14:02:07 crc kubenswrapper[4769]: I0122 14:02:07.111254 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6b8cb8655d-vl7kp" Jan 22 14:02:07 crc kubenswrapper[4769]: I0122 14:02:07.111319 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lj5bs\" (UniqueName: \"kubernetes.io/projected/ddb12191-d02d-4e79-82cd-d164ecaf2093-kube-api-access-lj5bs\") pod \"keystone-d8d684bc6-pmxwh\" (UID: \"ddb12191-d02d-4e79-82cd-d164ecaf2093\") " pod="openstack/keystone-d8d684bc6-pmxwh" Jan 22 14:02:07 crc kubenswrapper[4769]: I0122 14:02:07.171591 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-d8d684bc6-pmxwh" Jan 22 14:02:07 crc kubenswrapper[4769]: I0122 14:02:07.628329 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6b8cb8655d-vl7kp"] Jan 22 14:02:07 crc kubenswrapper[4769]: W0122 14:02:07.629874 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d4588b0_8c00_47bf_8b6d_cab4a5d792ab.slice/crio-7aaf3cf879704b6a9b1748dea8137d65b446a7eb6eea9afd8ade0eb1a7ff6b75 WatchSource:0}: Error finding container 7aaf3cf879704b6a9b1748dea8137d65b446a7eb6eea9afd8ade0eb1a7ff6b75: Status 404 returned error can't find the container with id 7aaf3cf879704b6a9b1748dea8137d65b446a7eb6eea9afd8ade0eb1a7ff6b75 Jan 22 14:02:07 crc kubenswrapper[4769]: I0122 14:02:07.689141 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6b8cb8655d-vl7kp" event={"ID":"8d4588b0-8c00-47bf-8b6d-cab4a5d792ab","Type":"ContainerStarted","Data":"7aaf3cf879704b6a9b1748dea8137d65b446a7eb6eea9afd8ade0eb1a7ff6b75"} Jan 22 14:02:07 crc kubenswrapper[4769]: W0122 14:02:07.730474 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podddb12191_d02d_4e79_82cd_d164ecaf2093.slice/crio-aba3a8e3b9dab7cab488446a18c06cf01760943537af472b9227915d3f382f75 WatchSource:0}: Error finding container aba3a8e3b9dab7cab488446a18c06cf01760943537af472b9227915d3f382f75: Status 404 returned error can't find the container with id aba3a8e3b9dab7cab488446a18c06cf01760943537af472b9227915d3f382f75 Jan 22 14:02:07 crc kubenswrapper[4769]: I0122 14:02:07.735985 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-d8d684bc6-pmxwh"] Jan 22 14:02:08 crc kubenswrapper[4769]: I0122 14:02:08.699781 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6b8cb8655d-vl7kp" event={"ID":"8d4588b0-8c00-47bf-8b6d-cab4a5d792ab","Type":"ContainerStarted","Data":"7ec46ba8e82a290a46ebf843a25c1a4fc603d2f84ba0a9b9cc0de812101e9505"} Jan 22 14:02:08 crc kubenswrapper[4769]: I0122 14:02:08.701560 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-d8d684bc6-pmxwh" event={"ID":"ddb12191-d02d-4e79-82cd-d164ecaf2093","Type":"ContainerStarted","Data":"1617ab3d54fab1a56702f1417356dc7a33c92b9329ac93066aec0d9955c04658"} Jan 22 14:02:08 crc kubenswrapper[4769]: I0122 14:02:08.701616 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-d8d684bc6-pmxwh" event={"ID":"ddb12191-d02d-4e79-82cd-d164ecaf2093","Type":"ContainerStarted","Data":"aba3a8e3b9dab7cab488446a18c06cf01760943537af472b9227915d3f382f75"} Jan 22 14:02:08 crc kubenswrapper[4769]: I0122 14:02:08.701673 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-d8d684bc6-pmxwh" Jan 22 14:02:08 crc kubenswrapper[4769]: I0122 14:02:08.721322 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-d8d684bc6-pmxwh" podStartSLOduration=2.721301402 podStartE2EDuration="2.721301402s" podCreationTimestamp="2026-01-22 14:02:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:02:08.71500385 +0000 UTC m=+1108.126113779" watchObservedRunningTime="2026-01-22 14:02:08.721301402 +0000 UTC m=+1108.132411331" Jan 22 14:02:08 crc kubenswrapper[4769]: I0122 14:02:08.896861 4769 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 22 14:02:08 crc kubenswrapper[4769]: I0122 14:02:08.896910 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 22 14:02:08 crc kubenswrapper[4769]: I0122 14:02:08.935268 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 22 14:02:08 crc kubenswrapper[4769]: I0122 14:02:08.939361 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 22 14:02:09 crc kubenswrapper[4769]: I0122 14:02:09.681458 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8b5c85b87-8bcps" Jan 22 14:02:09 crc kubenswrapper[4769]: I0122 14:02:09.726620 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 22 14:02:09 crc kubenswrapper[4769]: I0122 14:02:09.726673 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 22 14:02:09 crc kubenswrapper[4769]: I0122 14:02:09.775032 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-twczw"] Jan 22 14:02:09 crc kubenswrapper[4769]: I0122 14:02:09.775630 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-twczw" podUID="650dfc14-f283-4318-b6bc-4b17cdea15fa" containerName="dnsmasq-dns" containerID="cri-o://098ee03ef551965af984bff04a29c55f7d0f27976988405cdc2003fa044f9d9b" gracePeriod=10 Jan 22 14:02:10 crc kubenswrapper[4769]: I0122 14:02:10.357263 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-twczw" Jan 22 14:02:10 crc kubenswrapper[4769]: I0122 14:02:10.461610 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6464b9bcc6-tjgjv" podUID="aa581bf8-802c-4c64-80fe-83a1baf50a6e" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Jan 22 14:02:10 crc kubenswrapper[4769]: I0122 14:02:10.473666 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/650dfc14-f283-4318-b6bc-4b17cdea15fa-ovsdbserver-nb\") pod \"650dfc14-f283-4318-b6bc-4b17cdea15fa\" (UID: \"650dfc14-f283-4318-b6bc-4b17cdea15fa\") " Jan 22 14:02:10 crc kubenswrapper[4769]: I0122 14:02:10.474260 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/650dfc14-f283-4318-b6bc-4b17cdea15fa-ovsdbserver-sb\") pod \"650dfc14-f283-4318-b6bc-4b17cdea15fa\" (UID: \"650dfc14-f283-4318-b6bc-4b17cdea15fa\") " Jan 22 14:02:10 crc kubenswrapper[4769]: I0122 14:02:10.474328 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/650dfc14-f283-4318-b6bc-4b17cdea15fa-dns-svc\") pod \"650dfc14-f283-4318-b6bc-4b17cdea15fa\" (UID: \"650dfc14-f283-4318-b6bc-4b17cdea15fa\") " Jan 22 14:02:10 crc kubenswrapper[4769]: I0122 14:02:10.474409 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ssjkk\" (UniqueName: \"kubernetes.io/projected/650dfc14-f283-4318-b6bc-4b17cdea15fa-kube-api-access-ssjkk\") pod \"650dfc14-f283-4318-b6bc-4b17cdea15fa\" (UID: \"650dfc14-f283-4318-b6bc-4b17cdea15fa\") " Jan 22 14:02:10 crc kubenswrapper[4769]: I0122 14:02:10.474481 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/650dfc14-f283-4318-b6bc-4b17cdea15fa-config\") pod \"650dfc14-f283-4318-b6bc-4b17cdea15fa\" (UID: \"650dfc14-f283-4318-b6bc-4b17cdea15fa\") " Jan 22 14:02:10 crc kubenswrapper[4769]: I0122 14:02:10.479471 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/650dfc14-f283-4318-b6bc-4b17cdea15fa-kube-api-access-ssjkk" (OuterVolumeSpecName: "kube-api-access-ssjkk") pod "650dfc14-f283-4318-b6bc-4b17cdea15fa" (UID: "650dfc14-f283-4318-b6bc-4b17cdea15fa"). InnerVolumeSpecName "kube-api-access-ssjkk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:02:10 crc kubenswrapper[4769]: I0122 14:02:10.481718 4769 patch_prober.go:28] interesting pod/machine-config-daemon-hwhw7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 14:02:10 crc kubenswrapper[4769]: I0122 14:02:10.481758 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 14:02:10 crc kubenswrapper[4769]: I0122 14:02:10.523180 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/650dfc14-f283-4318-b6bc-4b17cdea15fa-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "650dfc14-f283-4318-b6bc-4b17cdea15fa" (UID: "650dfc14-f283-4318-b6bc-4b17cdea15fa"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:02:10 crc kubenswrapper[4769]: I0122 14:02:10.531147 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/650dfc14-f283-4318-b6bc-4b17cdea15fa-config" (OuterVolumeSpecName: "config") pod "650dfc14-f283-4318-b6bc-4b17cdea15fa" (UID: "650dfc14-f283-4318-b6bc-4b17cdea15fa"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:02:10 crc kubenswrapper[4769]: I0122 14:02:10.531317 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/650dfc14-f283-4318-b6bc-4b17cdea15fa-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "650dfc14-f283-4318-b6bc-4b17cdea15fa" (UID: "650dfc14-f283-4318-b6bc-4b17cdea15fa"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:02:10 crc kubenswrapper[4769]: I0122 14:02:10.540386 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/650dfc14-f283-4318-b6bc-4b17cdea15fa-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "650dfc14-f283-4318-b6bc-4b17cdea15fa" (UID: "650dfc14-f283-4318-b6bc-4b17cdea15fa"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:02:10 crc kubenswrapper[4769]: I0122 14:02:10.576892 4769 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/650dfc14-f283-4318-b6bc-4b17cdea15fa-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:10 crc kubenswrapper[4769]: I0122 14:02:10.576934 4769 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/650dfc14-f283-4318-b6bc-4b17cdea15fa-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:10 crc kubenswrapper[4769]: I0122 14:02:10.576947 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ssjkk\" (UniqueName: \"kubernetes.io/projected/650dfc14-f283-4318-b6bc-4b17cdea15fa-kube-api-access-ssjkk\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:10 crc kubenswrapper[4769]: I0122 14:02:10.576960 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/650dfc14-f283-4318-b6bc-4b17cdea15fa-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:10 crc kubenswrapper[4769]: I0122 14:02:10.576972 4769 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/650dfc14-f283-4318-b6bc-4b17cdea15fa-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:10 crc kubenswrapper[4769]: I0122 14:02:10.582405 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7cc4c8d8bd-69kmb" podUID="9a6a04bb-fa49-41f8-b75b-9c27873f8a1f" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Jan 22 14:02:10 crc kubenswrapper[4769]: I0122 14:02:10.742396 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6b8cb8655d-vl7kp" event={"ID":"8d4588b0-8c00-47bf-8b6d-cab4a5d792ab","Type":"ContainerStarted","Data":"824b009f42f5d4ef849d8ad3db01e1ccf33eb73bee32e627513e9c3e9f3bd7ed"} Jan 22 14:02:10 crc kubenswrapper[4769]: I0122 14:02:10.743961 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6b8cb8655d-vl7kp" Jan 22 14:02:10 crc kubenswrapper[4769]: I0122 14:02:10.744023 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6b8cb8655d-vl7kp" Jan 22 14:02:10 crc kubenswrapper[4769]: I0122 14:02:10.752581 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7464458e-c450-4b87-80d6-30abeb62e9d2","Type":"ContainerStarted","Data":"bea22c9f83f03abc375d02e9ba136f822fe98bedf79bd391257fedebc9743217"} Jan 22 14:02:10 crc kubenswrapper[4769]: I0122 14:02:10.763215 4769 generic.go:334] "Generic (PLEG): container finished" podID="650dfc14-f283-4318-b6bc-4b17cdea15fa" containerID="098ee03ef551965af984bff04a29c55f7d0f27976988405cdc2003fa044f9d9b" exitCode=0 Jan 22 14:02:10 crc kubenswrapper[4769]: I0122 14:02:10.763297 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-twczw" Jan 22 14:02:10 crc kubenswrapper[4769]: I0122 14:02:10.763334 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-twczw" event={"ID":"650dfc14-f283-4318-b6bc-4b17cdea15fa","Type":"ContainerDied","Data":"098ee03ef551965af984bff04a29c55f7d0f27976988405cdc2003fa044f9d9b"} Jan 22 14:02:10 crc kubenswrapper[4769]: I0122 14:02:10.763365 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-twczw" event={"ID":"650dfc14-f283-4318-b6bc-4b17cdea15fa","Type":"ContainerDied","Data":"a54623f453232dd2973918c8cc988921d99892583486b82a39525e719c837225"} Jan 22 14:02:10 crc kubenswrapper[4769]: I0122 14:02:10.763389 4769 scope.go:117] "RemoveContainer" containerID="098ee03ef551965af984bff04a29c55f7d0f27976988405cdc2003fa044f9d9b" Jan 22 14:02:10 crc kubenswrapper[4769]: I0122 14:02:10.776192 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6b8cb8655d-vl7kp" podStartSLOduration=4.776173099 podStartE2EDuration="4.776173099s" podCreationTimestamp="2026-01-22 14:02:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:02:10.770128326 +0000 UTC m=+1110.181238255" watchObservedRunningTime="2026-01-22 14:02:10.776173099 +0000 UTC m=+1110.187283028" Jan 22 14:02:10 crc kubenswrapper[4769]: I0122 14:02:10.804543 4769 scope.go:117] "RemoveContainer" containerID="1b5e22d53825ab8bee8892212745d8ad1728568928a82c44731ac44eedd528b5" Jan 22 14:02:10 crc kubenswrapper[4769]: I0122 14:02:10.808693 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-twczw"] Jan 22 14:02:10 crc kubenswrapper[4769]: I0122 14:02:10.836334 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-twczw"] Jan 22 14:02:10 crc kubenswrapper[4769]: I0122 14:02:10.847816 4769 scope.go:117] "RemoveContainer" containerID="098ee03ef551965af984bff04a29c55f7d0f27976988405cdc2003fa044f9d9b" Jan 22 14:02:10 crc kubenswrapper[4769]: E0122 14:02:10.849195 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"098ee03ef551965af984bff04a29c55f7d0f27976988405cdc2003fa044f9d9b\": container with ID starting with 098ee03ef551965af984bff04a29c55f7d0f27976988405cdc2003fa044f9d9b not found: ID does not exist" containerID="098ee03ef551965af984bff04a29c55f7d0f27976988405cdc2003fa044f9d9b" Jan 22 14:02:10 crc kubenswrapper[4769]: I0122 14:02:10.849249 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"098ee03ef551965af984bff04a29c55f7d0f27976988405cdc2003fa044f9d9b"} err="failed to get container status \"098ee03ef551965af984bff04a29c55f7d0f27976988405cdc2003fa044f9d9b\": rpc error: code = NotFound desc = could not find container \"098ee03ef551965af984bff04a29c55f7d0f27976988405cdc2003fa044f9d9b\": container with ID starting with 098ee03ef551965af984bff04a29c55f7d0f27976988405cdc2003fa044f9d9b not found: ID does not exist" Jan 22 14:02:10 crc kubenswrapper[4769]: I0122 14:02:10.849274 4769 scope.go:117] "RemoveContainer" containerID="1b5e22d53825ab8bee8892212745d8ad1728568928a82c44731ac44eedd528b5" Jan 22 14:02:10 crc kubenswrapper[4769]: E0122 14:02:10.850156 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"1b5e22d53825ab8bee8892212745d8ad1728568928a82c44731ac44eedd528b5\": container with ID starting with 1b5e22d53825ab8bee8892212745d8ad1728568928a82c44731ac44eedd528b5 not found: ID does not exist" containerID="1b5e22d53825ab8bee8892212745d8ad1728568928a82c44731ac44eedd528b5" Jan 22 14:02:10 crc kubenswrapper[4769]: I0122 14:02:10.850184 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b5e22d53825ab8bee8892212745d8ad1728568928a82c44731ac44eedd528b5"} err="failed to get container status \"1b5e22d53825ab8bee8892212745d8ad1728568928a82c44731ac44eedd528b5\": rpc error: code = NotFound desc = could not find container \"1b5e22d53825ab8bee8892212745d8ad1728568928a82c44731ac44eedd528b5\": container with ID starting with 1b5e22d53825ab8bee8892212745d8ad1728568928a82c44731ac44eedd528b5 not found: ID does not exist" Jan 22 14:02:10 crc kubenswrapper[4769]: I0122 14:02:10.897245 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="650dfc14-f283-4318-b6bc-4b17cdea15fa" path="/var/lib/kubelet/pods/650dfc14-f283-4318-b6bc-4b17cdea15fa/volumes" Jan 22 14:02:11 crc kubenswrapper[4769]: I0122 14:02:11.773206 4769 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 14:02:11 crc kubenswrapper[4769]: I0122 14:02:11.773549 4769 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 14:02:12 crc kubenswrapper[4769]: I0122 14:02:12.450785 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 22 14:02:12 crc kubenswrapper[4769]: I0122 14:02:12.485669 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 22 14:02:12 crc kubenswrapper[4769]: I0122 14:02:12.784621 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-l4hnw" event={"ID":"3eb8819f-512d-43d8-a59e-1ba8e7e1fb06","Type":"ContainerStarted","Data":"5e70825bce9fda82996c69d7184b5c0089e4b77074cca5f87821576c29bc3590"} Jan 22 14:02:12 crc kubenswrapper[4769]: I0122 14:02:12.790592 4769 generic.go:334] "Generic (PLEG): container finished" podID="f7c0ef06-5806-418c-8a10-81ea6afb0401" containerID="3c1a07b1b0fdcc85ff1215b6b0ffc50eb270b562fc9ca8873d111f3b05220e1b" exitCode=0 Jan 22 14:02:12 crc kubenswrapper[4769]: I0122 14:02:12.790689 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-rqjpw" event={"ID":"f7c0ef06-5806-418c-8a10-81ea6afb0401","Type":"ContainerDied","Data":"3c1a07b1b0fdcc85ff1215b6b0ffc50eb270b562fc9ca8873d111f3b05220e1b"} Jan 22 14:02:12 crc kubenswrapper[4769]: I0122 14:02:12.826825 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-l4hnw" podStartSLOduration=3.278495364 podStartE2EDuration="55.826784512s" podCreationTimestamp="2026-01-22 14:01:17 +0000 UTC" firstStartedPulling="2026-01-22 14:01:18.946661322 +0000 UTC m=+1058.357771251" lastFinishedPulling="2026-01-22 14:02:11.49495047 +0000 UTC m=+1110.906060399" observedRunningTime="2026-01-22 14:02:12.813607104 +0000 UTC m=+1112.224717033" watchObservedRunningTime="2026-01-22 14:02:12.826784512 +0000 UTC m=+1112.237894441" Jan 22 14:02:13 crc kubenswrapper[4769]: I0122 14:02:13.029344 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 22 14:02:13 crc kubenswrapper[4769]: I0122 14:02:13.029689 4769 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 22 14:02:13 crc kubenswrapper[4769]: I0122 14:02:13.089374 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 22 14:02:13 crc kubenswrapper[4769]: I0122 14:02:13.100332 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 22 14:02:13 crc kubenswrapper[4769]: I0122 14:02:13.813720 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-zzjpd" event={"ID":"a7f766e1-262c-4861-a117-2454631e284f","Type":"ContainerStarted","Data":"fe625d5ef022f97b15014934b8ace95f1c730255ffa2604dde5ccc072b731811"} Jan 22 14:02:13 crc kubenswrapper[4769]: I0122 14:02:13.815454 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 22 14:02:13 crc kubenswrapper[4769]: I0122 14:02:13.815506 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 22 14:02:13 crc kubenswrapper[4769]: I0122 14:02:13.842764 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-zzjpd" podStartSLOduration=3.1365269749999998 podStartE2EDuration="56.842745529s" podCreationTimestamp="2026-01-22 14:01:17 +0000 UTC" firstStartedPulling="2026-01-22 14:01:18.962989665 +0000 UTC m=+1058.374099594" lastFinishedPulling="2026-01-22 14:02:12.669208219 +0000 UTC m=+1112.080318148" observedRunningTime="2026-01-22 14:02:13.838397961 +0000 UTC m=+1113.249507890" watchObservedRunningTime="2026-01-22 14:02:13.842745529 +0000 UTC m=+1113.253855448" Jan 22 14:02:14 crc kubenswrapper[4769]: I0122 14:02:14.242865 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-rqjpw" Jan 22 14:02:14 crc kubenswrapper[4769]: I0122 14:02:14.359144 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzsdc\" (UniqueName: \"kubernetes.io/projected/f7c0ef06-5806-418c-8a10-81ea6afb0401-kube-api-access-rzsdc\") pod \"f7c0ef06-5806-418c-8a10-81ea6afb0401\" (UID: \"f7c0ef06-5806-418c-8a10-81ea6afb0401\") " Jan 22 14:02:14 crc kubenswrapper[4769]: I0122 14:02:14.359287 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f7c0ef06-5806-418c-8a10-81ea6afb0401-config\") pod \"f7c0ef06-5806-418c-8a10-81ea6afb0401\" (UID: \"f7c0ef06-5806-418c-8a10-81ea6afb0401\") " Jan 22 14:02:14 crc kubenswrapper[4769]: I0122 14:02:14.359380 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7c0ef06-5806-418c-8a10-81ea6afb0401-combined-ca-bundle\") pod \"f7c0ef06-5806-418c-8a10-81ea6afb0401\" (UID: \"f7c0ef06-5806-418c-8a10-81ea6afb0401\") " Jan 22 14:02:14 crc kubenswrapper[4769]: I0122 14:02:14.387002 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7c0ef06-5806-418c-8a10-81ea6afb0401-kube-api-access-rzsdc" (OuterVolumeSpecName: "kube-api-access-rzsdc") pod "f7c0ef06-5806-418c-8a10-81ea6afb0401" (UID: "f7c0ef06-5806-418c-8a10-81ea6afb0401"). InnerVolumeSpecName "kube-api-access-rzsdc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:02:14 crc kubenswrapper[4769]: I0122 14:02:14.396888 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7c0ef06-5806-418c-8a10-81ea6afb0401-config" (OuterVolumeSpecName: "config") pod "f7c0ef06-5806-418c-8a10-81ea6afb0401" (UID: "f7c0ef06-5806-418c-8a10-81ea6afb0401"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:14 crc kubenswrapper[4769]: I0122 14:02:14.401964 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7c0ef06-5806-418c-8a10-81ea6afb0401-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f7c0ef06-5806-418c-8a10-81ea6afb0401" (UID: "f7c0ef06-5806-418c-8a10-81ea6afb0401"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:14 crc kubenswrapper[4769]: I0122 14:02:14.462076 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/f7c0ef06-5806-418c-8a10-81ea6afb0401-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:14 crc kubenswrapper[4769]: I0122 14:02:14.462116 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7c0ef06-5806-418c-8a10-81ea6afb0401-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:14 crc kubenswrapper[4769]: I0122 14:02:14.462136 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzsdc\" (UniqueName: \"kubernetes.io/projected/f7c0ef06-5806-418c-8a10-81ea6afb0401-kube-api-access-rzsdc\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:14 crc kubenswrapper[4769]: I0122 14:02:14.824162 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-rqjpw" Jan 22 14:02:14 crc kubenswrapper[4769]: I0122 14:02:14.824179 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-rqjpw" event={"ID":"f7c0ef06-5806-418c-8a10-81ea6afb0401","Type":"ContainerDied","Data":"f5f34c732ee37b95ec899f49855f9cce53d55317437fe6fd87284898a608994d"} Jan 22 14:02:14 crc kubenswrapper[4769]: I0122 14:02:14.824254 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5f34c732ee37b95ec899f49855f9cce53d55317437fe6fd87284898a608994d" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.100812 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-86ktd"] Jan 22 14:02:15 crc kubenswrapper[4769]: E0122 14:02:15.101215 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="650dfc14-f283-4318-b6bc-4b17cdea15fa" containerName="dnsmasq-dns" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.101238 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="650dfc14-f283-4318-b6bc-4b17cdea15fa" containerName="dnsmasq-dns" Jan 22 14:02:15 crc kubenswrapper[4769]: E0122 14:02:15.101257 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7c0ef06-5806-418c-8a10-81ea6afb0401" containerName="neutron-db-sync" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.101265 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7c0ef06-5806-418c-8a10-81ea6afb0401" containerName="neutron-db-sync" Jan 22 14:02:15 crc kubenswrapper[4769]: E0122 14:02:15.101281 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="650dfc14-f283-4318-b6bc-4b17cdea15fa" containerName="init" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.101289 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="650dfc14-f283-4318-b6bc-4b17cdea15fa" containerName="init" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.101535 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="650dfc14-f283-4318-b6bc-4b17cdea15fa" containerName="dnsmasq-dns" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.101581 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7c0ef06-5806-418c-8a10-81ea6afb0401" containerName="neutron-db-sync" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.102713 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-86ktd" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.140382 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-86ktd"] Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.203914 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7ffdb95bfd-x5vfj"] Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.206269 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7ffdb95bfd-x5vfj" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.216219 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6b8cb8655d-vl7kp" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.216440 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.216723 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.216834 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.216963 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-7p5j2" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.227972 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7ffdb95bfd-x5vfj"] Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.284389 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c490b1f2-d1fa-4db7-8aeb-97c8bb694323-config\") pod \"dnsmasq-dns-84b966f6c9-86ktd\" (UID: \"c490b1f2-d1fa-4db7-8aeb-97c8bb694323\") " pod="openstack/dnsmasq-dns-84b966f6c9-86ktd" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.284446 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c490b1f2-d1fa-4db7-8aeb-97c8bb694323-dns-swift-storage-0\") pod \"dnsmasq-dns-84b966f6c9-86ktd\" (UID: \"c490b1f2-d1fa-4db7-8aeb-97c8bb694323\") " pod="openstack/dnsmasq-dns-84b966f6c9-86ktd" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.284493 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c490b1f2-d1fa-4db7-8aeb-97c8bb694323-dns-svc\") pod \"dnsmasq-dns-84b966f6c9-86ktd\" (UID: \"c490b1f2-d1fa-4db7-8aeb-97c8bb694323\") " pod="openstack/dnsmasq-dns-84b966f6c9-86ktd" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.284521 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c490b1f2-d1fa-4db7-8aeb-97c8bb694323-ovsdbserver-nb\") pod \"dnsmasq-dns-84b966f6c9-86ktd\" (UID: \"c490b1f2-d1fa-4db7-8aeb-97c8bb694323\") " pod="openstack/dnsmasq-dns-84b966f6c9-86ktd" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.284553 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c490b1f2-d1fa-4db7-8aeb-97c8bb694323-ovsdbserver-sb\") pod \"dnsmasq-dns-84b966f6c9-86ktd\" (UID: \"c490b1f2-d1fa-4db7-8aeb-97c8bb694323\") " pod="openstack/dnsmasq-dns-84b966f6c9-86ktd" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.284580 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zhpj\" (UniqueName: \"kubernetes.io/projected/c490b1f2-d1fa-4db7-8aeb-97c8bb694323-kube-api-access-7zhpj\") pod \"dnsmasq-dns-84b966f6c9-86ktd\" (UID: \"c490b1f2-d1fa-4db7-8aeb-97c8bb694323\") " pod="openstack/dnsmasq-dns-84b966f6c9-86ktd" Jan 22 14:02:15 crc kubenswrapper[4769]: 
I0122 14:02:15.385931 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c490b1f2-d1fa-4db7-8aeb-97c8bb694323-ovsdbserver-sb\") pod \"dnsmasq-dns-84b966f6c9-86ktd\" (UID: \"c490b1f2-d1fa-4db7-8aeb-97c8bb694323\") " pod="openstack/dnsmasq-dns-84b966f6c9-86ktd" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.385989 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zhpj\" (UniqueName: \"kubernetes.io/projected/c490b1f2-d1fa-4db7-8aeb-97c8bb694323-kube-api-access-7zhpj\") pod \"dnsmasq-dns-84b966f6c9-86ktd\" (UID: \"c490b1f2-d1fa-4db7-8aeb-97c8bb694323\") " pod="openstack/dnsmasq-dns-84b966f6c9-86ktd" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.386040 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtk6z\" (UniqueName: \"kubernetes.io/projected/0783e518-6a8e-43a3-9b33-4d0710f958f6-kube-api-access-jtk6z\") pod \"neutron-7ffdb95bfd-x5vfj\" (UID: \"0783e518-6a8e-43a3-9b33-4d0710f958f6\") " pod="openstack/neutron-7ffdb95bfd-x5vfj" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.386062 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0783e518-6a8e-43a3-9b33-4d0710f958f6-combined-ca-bundle\") pod \"neutron-7ffdb95bfd-x5vfj\" (UID: \"0783e518-6a8e-43a3-9b33-4d0710f958f6\") " pod="openstack/neutron-7ffdb95bfd-x5vfj" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.386123 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0783e518-6a8e-43a3-9b33-4d0710f958f6-httpd-config\") pod \"neutron-7ffdb95bfd-x5vfj\" (UID: \"0783e518-6a8e-43a3-9b33-4d0710f958f6\") " pod="openstack/neutron-7ffdb95bfd-x5vfj" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.386176 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c490b1f2-d1fa-4db7-8aeb-97c8bb694323-config\") pod \"dnsmasq-dns-84b966f6c9-86ktd\" (UID: \"c490b1f2-d1fa-4db7-8aeb-97c8bb694323\") " pod="openstack/dnsmasq-dns-84b966f6c9-86ktd" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.386204 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0783e518-6a8e-43a3-9b33-4d0710f958f6-ovndb-tls-certs\") pod \"neutron-7ffdb95bfd-x5vfj\" (UID: \"0783e518-6a8e-43a3-9b33-4d0710f958f6\") " pod="openstack/neutron-7ffdb95bfd-x5vfj" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.386256 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c490b1f2-d1fa-4db7-8aeb-97c8bb694323-dns-swift-storage-0\") pod \"dnsmasq-dns-84b966f6c9-86ktd\" (UID: \"c490b1f2-d1fa-4db7-8aeb-97c8bb694323\") " pod="openstack/dnsmasq-dns-84b966f6c9-86ktd" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.386280 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0783e518-6a8e-43a3-9b33-4d0710f958f6-config\") pod \"neutron-7ffdb95bfd-x5vfj\" (UID: \"0783e518-6a8e-43a3-9b33-4d0710f958f6\") " pod="openstack/neutron-7ffdb95bfd-x5vfj" Jan 22 14:02:15 crc 
kubenswrapper[4769]: I0122 14:02:15.386321 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c490b1f2-d1fa-4db7-8aeb-97c8bb694323-dns-svc\") pod \"dnsmasq-dns-84b966f6c9-86ktd\" (UID: \"c490b1f2-d1fa-4db7-8aeb-97c8bb694323\") " pod="openstack/dnsmasq-dns-84b966f6c9-86ktd" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.386349 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c490b1f2-d1fa-4db7-8aeb-97c8bb694323-ovsdbserver-nb\") pod \"dnsmasq-dns-84b966f6c9-86ktd\" (UID: \"c490b1f2-d1fa-4db7-8aeb-97c8bb694323\") " pod="openstack/dnsmasq-dns-84b966f6c9-86ktd" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.386996 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c490b1f2-d1fa-4db7-8aeb-97c8bb694323-ovsdbserver-sb\") pod \"dnsmasq-dns-84b966f6c9-86ktd\" (UID: \"c490b1f2-d1fa-4db7-8aeb-97c8bb694323\") " pod="openstack/dnsmasq-dns-84b966f6c9-86ktd" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.387636 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c490b1f2-d1fa-4db7-8aeb-97c8bb694323-config\") pod \"dnsmasq-dns-84b966f6c9-86ktd\" (UID: \"c490b1f2-d1fa-4db7-8aeb-97c8bb694323\") " pod="openstack/dnsmasq-dns-84b966f6c9-86ktd" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.387738 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c490b1f2-d1fa-4db7-8aeb-97c8bb694323-ovsdbserver-nb\") pod \"dnsmasq-dns-84b966f6c9-86ktd\" (UID: \"c490b1f2-d1fa-4db7-8aeb-97c8bb694323\") " pod="openstack/dnsmasq-dns-84b966f6c9-86ktd" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.388109 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c490b1f2-d1fa-4db7-8aeb-97c8bb694323-dns-svc\") pod \"dnsmasq-dns-84b966f6c9-86ktd\" (UID: \"c490b1f2-d1fa-4db7-8aeb-97c8bb694323\") " pod="openstack/dnsmasq-dns-84b966f6c9-86ktd" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.388478 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c490b1f2-d1fa-4db7-8aeb-97c8bb694323-dns-swift-storage-0\") pod \"dnsmasq-dns-84b966f6c9-86ktd\" (UID: \"c490b1f2-d1fa-4db7-8aeb-97c8bb694323\") " pod="openstack/dnsmasq-dns-84b966f6c9-86ktd" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.411687 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zhpj\" (UniqueName: \"kubernetes.io/projected/c490b1f2-d1fa-4db7-8aeb-97c8bb694323-kube-api-access-7zhpj\") pod \"dnsmasq-dns-84b966f6c9-86ktd\" (UID: \"c490b1f2-d1fa-4db7-8aeb-97c8bb694323\") " pod="openstack/dnsmasq-dns-84b966f6c9-86ktd" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.453037 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-86ktd" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.488404 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtk6z\" (UniqueName: \"kubernetes.io/projected/0783e518-6a8e-43a3-9b33-4d0710f958f6-kube-api-access-jtk6z\") pod \"neutron-7ffdb95bfd-x5vfj\" (UID: \"0783e518-6a8e-43a3-9b33-4d0710f958f6\") " pod="openstack/neutron-7ffdb95bfd-x5vfj" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.488456 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0783e518-6a8e-43a3-9b33-4d0710f958f6-combined-ca-bundle\") pod \"neutron-7ffdb95bfd-x5vfj\" (UID: \"0783e518-6a8e-43a3-9b33-4d0710f958f6\") " pod="openstack/neutron-7ffdb95bfd-x5vfj" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.488493 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0783e518-6a8e-43a3-9b33-4d0710f958f6-httpd-config\") pod \"neutron-7ffdb95bfd-x5vfj\" (UID: \"0783e518-6a8e-43a3-9b33-4d0710f958f6\") " pod="openstack/neutron-7ffdb95bfd-x5vfj" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.488541 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0783e518-6a8e-43a3-9b33-4d0710f958f6-ovndb-tls-certs\") pod \"neutron-7ffdb95bfd-x5vfj\" (UID: \"0783e518-6a8e-43a3-9b33-4d0710f958f6\") " pod="openstack/neutron-7ffdb95bfd-x5vfj" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.488570 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0783e518-6a8e-43a3-9b33-4d0710f958f6-config\") pod \"neutron-7ffdb95bfd-x5vfj\" (UID: \"0783e518-6a8e-43a3-9b33-4d0710f958f6\") " pod="openstack/neutron-7ffdb95bfd-x5vfj" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.500606 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0783e518-6a8e-43a3-9b33-4d0710f958f6-ovndb-tls-certs\") pod \"neutron-7ffdb95bfd-x5vfj\" (UID: \"0783e518-6a8e-43a3-9b33-4d0710f958f6\") " pod="openstack/neutron-7ffdb95bfd-x5vfj" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.500820 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0783e518-6a8e-43a3-9b33-4d0710f958f6-combined-ca-bundle\") pod \"neutron-7ffdb95bfd-x5vfj\" (UID: \"0783e518-6a8e-43a3-9b33-4d0710f958f6\") " pod="openstack/neutron-7ffdb95bfd-x5vfj" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.501660 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/0783e518-6a8e-43a3-9b33-4d0710f958f6-config\") pod \"neutron-7ffdb95bfd-x5vfj\" (UID: \"0783e518-6a8e-43a3-9b33-4d0710f958f6\") " pod="openstack/neutron-7ffdb95bfd-x5vfj" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.514686 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtk6z\" (UniqueName: \"kubernetes.io/projected/0783e518-6a8e-43a3-9b33-4d0710f958f6-kube-api-access-jtk6z\") pod \"neutron-7ffdb95bfd-x5vfj\" (UID: \"0783e518-6a8e-43a3-9b33-4d0710f958f6\") " pod="openstack/neutron-7ffdb95bfd-x5vfj" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.525498 4769 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0783e518-6a8e-43a3-9b33-4d0710f958f6-httpd-config\") pod \"neutron-7ffdb95bfd-x5vfj\" (UID: \"0783e518-6a8e-43a3-9b33-4d0710f958f6\") " pod="openstack/neutron-7ffdb95bfd-x5vfj" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.551208 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7ffdb95bfd-x5vfj" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.831572 4769 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.831926 4769 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 14:02:16 crc kubenswrapper[4769]: I0122 14:02:16.057531 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-86ktd"] Jan 22 14:02:16 crc kubenswrapper[4769]: W0122 14:02:16.067301 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc490b1f2_d1fa_4db7_8aeb_97c8bb694323.slice/crio-793616f841995ae0490e98118d3493c2f1448e1097fa4b42bba1bfcb0fff0710 WatchSource:0}: Error finding container 793616f841995ae0490e98118d3493c2f1448e1097fa4b42bba1bfcb0fff0710: Status 404 returned error can't find the container with id 793616f841995ae0490e98118d3493c2f1448e1097fa4b42bba1bfcb0fff0710 Jan 22 14:02:16 crc kubenswrapper[4769]: I0122 14:02:16.231990 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7ffdb95bfd-x5vfj"] Jan 22 14:02:16 crc kubenswrapper[4769]: I0122 14:02:16.351585 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 22 14:02:16 crc kubenswrapper[4769]: I0122 14:02:16.438697 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 22 14:02:16 crc kubenswrapper[4769]: I0122 14:02:16.734376 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6b8cb8655d-vl7kp" Jan 22 14:02:16 crc kubenswrapper[4769]: I0122 14:02:16.875111 4769 generic.go:334] "Generic (PLEG): container finished" podID="c490b1f2-d1fa-4db7-8aeb-97c8bb694323" containerID="20368f0045746ae0eecdaf41771b04b1db51dc750b5f58a1ea919250b07080f1" exitCode=0 Jan 22 14:02:16 crc kubenswrapper[4769]: I0122 14:02:16.875540 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-86ktd" event={"ID":"c490b1f2-d1fa-4db7-8aeb-97c8bb694323","Type":"ContainerDied","Data":"20368f0045746ae0eecdaf41771b04b1db51dc750b5f58a1ea919250b07080f1"} Jan 22 14:02:16 crc kubenswrapper[4769]: I0122 14:02:16.875579 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-86ktd" event={"ID":"c490b1f2-d1fa-4db7-8aeb-97c8bb694323","Type":"ContainerStarted","Data":"793616f841995ae0490e98118d3493c2f1448e1097fa4b42bba1bfcb0fff0710"} Jan 22 14:02:16 crc kubenswrapper[4769]: I0122 14:02:16.929549 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7ffdb95bfd-x5vfj" event={"ID":"0783e518-6a8e-43a3-9b33-4d0710f958f6","Type":"ContainerStarted","Data":"c85ede29f7444218742a32b8c6ee6ce640aed0f91c712213650abe7455210e79"} Jan 22 14:02:16 crc kubenswrapper[4769]: I0122 14:02:16.929615 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7ffdb95bfd-x5vfj" 
event={"ID":"0783e518-6a8e-43a3-9b33-4d0710f958f6","Type":"ContainerStarted","Data":"7728df5824bdc02cf7f433c8c65dbea0209e0b45bf371c7fd3ff2a02c06db9ef"} Jan 22 14:02:17 crc kubenswrapper[4769]: I0122 14:02:17.342819 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5d6bcd56b9-2hx4m"] Jan 22 14:02:17 crc kubenswrapper[4769]: I0122 14:02:17.345191 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5d6bcd56b9-2hx4m" Jan 22 14:02:17 crc kubenswrapper[4769]: I0122 14:02:17.346827 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5d6bcd56b9-2hx4m"] Jan 22 14:02:17 crc kubenswrapper[4769]: I0122 14:02:17.349385 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 22 14:02:17 crc kubenswrapper[4769]: I0122 14:02:17.353066 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 22 14:02:17 crc kubenswrapper[4769]: I0122 14:02:17.483727 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a582ad75-7aa2-4ee6-9631-6726b7db9650-config\") pod \"neutron-5d6bcd56b9-2hx4m\" (UID: \"a582ad75-7aa2-4ee6-9631-6726b7db9650\") " pod="openstack/neutron-5d6bcd56b9-2hx4m" Jan 22 14:02:17 crc kubenswrapper[4769]: I0122 14:02:17.483817 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85qfb\" (UniqueName: \"kubernetes.io/projected/a582ad75-7aa2-4ee6-9631-6726b7db9650-kube-api-access-85qfb\") pod \"neutron-5d6bcd56b9-2hx4m\" (UID: \"a582ad75-7aa2-4ee6-9631-6726b7db9650\") " pod="openstack/neutron-5d6bcd56b9-2hx4m" Jan 22 14:02:17 crc kubenswrapper[4769]: I0122 14:02:17.483851 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a582ad75-7aa2-4ee6-9631-6726b7db9650-internal-tls-certs\") pod \"neutron-5d6bcd56b9-2hx4m\" (UID: \"a582ad75-7aa2-4ee6-9631-6726b7db9650\") " pod="openstack/neutron-5d6bcd56b9-2hx4m" Jan 22 14:02:17 crc kubenswrapper[4769]: I0122 14:02:17.483882 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a582ad75-7aa2-4ee6-9631-6726b7db9650-combined-ca-bundle\") pod \"neutron-5d6bcd56b9-2hx4m\" (UID: \"a582ad75-7aa2-4ee6-9631-6726b7db9650\") " pod="openstack/neutron-5d6bcd56b9-2hx4m" Jan 22 14:02:17 crc kubenswrapper[4769]: I0122 14:02:17.483946 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a582ad75-7aa2-4ee6-9631-6726b7db9650-public-tls-certs\") pod \"neutron-5d6bcd56b9-2hx4m\" (UID: \"a582ad75-7aa2-4ee6-9631-6726b7db9650\") " pod="openstack/neutron-5d6bcd56b9-2hx4m" Jan 22 14:02:17 crc kubenswrapper[4769]: I0122 14:02:17.483963 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a582ad75-7aa2-4ee6-9631-6726b7db9650-ovndb-tls-certs\") pod \"neutron-5d6bcd56b9-2hx4m\" (UID: \"a582ad75-7aa2-4ee6-9631-6726b7db9650\") " pod="openstack/neutron-5d6bcd56b9-2hx4m" Jan 22 14:02:17 crc kubenswrapper[4769]: I0122 14:02:17.484034 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"httpd-config\" (UniqueName: \"kubernetes.io/secret/a582ad75-7aa2-4ee6-9631-6726b7db9650-httpd-config\") pod \"neutron-5d6bcd56b9-2hx4m\" (UID: \"a582ad75-7aa2-4ee6-9631-6726b7db9650\") " pod="openstack/neutron-5d6bcd56b9-2hx4m" Jan 22 14:02:17 crc kubenswrapper[4769]: I0122 14:02:17.585622 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a582ad75-7aa2-4ee6-9631-6726b7db9650-httpd-config\") pod \"neutron-5d6bcd56b9-2hx4m\" (UID: \"a582ad75-7aa2-4ee6-9631-6726b7db9650\") " pod="openstack/neutron-5d6bcd56b9-2hx4m" Jan 22 14:02:17 crc kubenswrapper[4769]: I0122 14:02:17.586032 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a582ad75-7aa2-4ee6-9631-6726b7db9650-config\") pod \"neutron-5d6bcd56b9-2hx4m\" (UID: \"a582ad75-7aa2-4ee6-9631-6726b7db9650\") " pod="openstack/neutron-5d6bcd56b9-2hx4m" Jan 22 14:02:17 crc kubenswrapper[4769]: I0122 14:02:17.586088 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85qfb\" (UniqueName: \"kubernetes.io/projected/a582ad75-7aa2-4ee6-9631-6726b7db9650-kube-api-access-85qfb\") pod \"neutron-5d6bcd56b9-2hx4m\" (UID: \"a582ad75-7aa2-4ee6-9631-6726b7db9650\") " pod="openstack/neutron-5d6bcd56b9-2hx4m" Jan 22 14:02:17 crc kubenswrapper[4769]: I0122 14:02:17.586123 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a582ad75-7aa2-4ee6-9631-6726b7db9650-internal-tls-certs\") pod \"neutron-5d6bcd56b9-2hx4m\" (UID: \"a582ad75-7aa2-4ee6-9631-6726b7db9650\") " pod="openstack/neutron-5d6bcd56b9-2hx4m" Jan 22 14:02:17 crc kubenswrapper[4769]: I0122 14:02:17.586174 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a582ad75-7aa2-4ee6-9631-6726b7db9650-combined-ca-bundle\") pod \"neutron-5d6bcd56b9-2hx4m\" (UID: \"a582ad75-7aa2-4ee6-9631-6726b7db9650\") " pod="openstack/neutron-5d6bcd56b9-2hx4m" Jan 22 14:02:17 crc kubenswrapper[4769]: I0122 14:02:17.586222 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a582ad75-7aa2-4ee6-9631-6726b7db9650-public-tls-certs\") pod \"neutron-5d6bcd56b9-2hx4m\" (UID: \"a582ad75-7aa2-4ee6-9631-6726b7db9650\") " pod="openstack/neutron-5d6bcd56b9-2hx4m" Jan 22 14:02:17 crc kubenswrapper[4769]: I0122 14:02:17.586242 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a582ad75-7aa2-4ee6-9631-6726b7db9650-ovndb-tls-certs\") pod \"neutron-5d6bcd56b9-2hx4m\" (UID: \"a582ad75-7aa2-4ee6-9631-6726b7db9650\") " pod="openstack/neutron-5d6bcd56b9-2hx4m" Jan 22 14:02:17 crc kubenswrapper[4769]: I0122 14:02:17.592828 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a582ad75-7aa2-4ee6-9631-6726b7db9650-internal-tls-certs\") pod \"neutron-5d6bcd56b9-2hx4m\" (UID: \"a582ad75-7aa2-4ee6-9631-6726b7db9650\") " pod="openstack/neutron-5d6bcd56b9-2hx4m" Jan 22 14:02:17 crc kubenswrapper[4769]: I0122 14:02:17.593495 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a582ad75-7aa2-4ee6-9631-6726b7db9650-httpd-config\") pod \"neutron-5d6bcd56b9-2hx4m\" (UID: 
\"a582ad75-7aa2-4ee6-9631-6726b7db9650\") " pod="openstack/neutron-5d6bcd56b9-2hx4m" Jan 22 14:02:17 crc kubenswrapper[4769]: I0122 14:02:17.593936 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/a582ad75-7aa2-4ee6-9631-6726b7db9650-config\") pod \"neutron-5d6bcd56b9-2hx4m\" (UID: \"a582ad75-7aa2-4ee6-9631-6726b7db9650\") " pod="openstack/neutron-5d6bcd56b9-2hx4m" Jan 22 14:02:17 crc kubenswrapper[4769]: I0122 14:02:17.594090 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a582ad75-7aa2-4ee6-9631-6726b7db9650-ovndb-tls-certs\") pod \"neutron-5d6bcd56b9-2hx4m\" (UID: \"a582ad75-7aa2-4ee6-9631-6726b7db9650\") " pod="openstack/neutron-5d6bcd56b9-2hx4m" Jan 22 14:02:17 crc kubenswrapper[4769]: I0122 14:02:17.599331 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a582ad75-7aa2-4ee6-9631-6726b7db9650-public-tls-certs\") pod \"neutron-5d6bcd56b9-2hx4m\" (UID: \"a582ad75-7aa2-4ee6-9631-6726b7db9650\") " pod="openstack/neutron-5d6bcd56b9-2hx4m" Jan 22 14:02:17 crc kubenswrapper[4769]: I0122 14:02:17.601992 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85qfb\" (UniqueName: \"kubernetes.io/projected/a582ad75-7aa2-4ee6-9631-6726b7db9650-kube-api-access-85qfb\") pod \"neutron-5d6bcd56b9-2hx4m\" (UID: \"a582ad75-7aa2-4ee6-9631-6726b7db9650\") " pod="openstack/neutron-5d6bcd56b9-2hx4m" Jan 22 14:02:17 crc kubenswrapper[4769]: I0122 14:02:17.610156 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a582ad75-7aa2-4ee6-9631-6726b7db9650-combined-ca-bundle\") pod \"neutron-5d6bcd56b9-2hx4m\" (UID: \"a582ad75-7aa2-4ee6-9631-6726b7db9650\") " pod="openstack/neutron-5d6bcd56b9-2hx4m" Jan 22 14:02:17 crc kubenswrapper[4769]: I0122 14:02:17.686374 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5d6bcd56b9-2hx4m" Jan 22 14:02:17 crc kubenswrapper[4769]: I0122 14:02:17.926768 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7ffdb95bfd-x5vfj" event={"ID":"0783e518-6a8e-43a3-9b33-4d0710f958f6","Type":"ContainerStarted","Data":"1a3f324f9c10250340c90b3fa9891a5895621c3821c7c74ce5c3074476e207b0"} Jan 22 14:02:17 crc kubenswrapper[4769]: I0122 14:02:17.927024 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7ffdb95bfd-x5vfj" Jan 22 14:02:17 crc kubenswrapper[4769]: I0122 14:02:17.929938 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-86ktd" event={"ID":"c490b1f2-d1fa-4db7-8aeb-97c8bb694323","Type":"ContainerStarted","Data":"dcf65cb3f9e2afa84af423f382b410a4f6ad273e1b71084aa7b89b603bbfc0ab"} Jan 22 14:02:17 crc kubenswrapper[4769]: I0122 14:02:17.930326 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-84b966f6c9-86ktd" Jan 22 14:02:17 crc kubenswrapper[4769]: I0122 14:02:17.963460 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7ffdb95bfd-x5vfj" podStartSLOduration=2.963442402 podStartE2EDuration="2.963442402s" podCreationTimestamp="2026-01-22 14:02:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:02:17.948427105 +0000 UTC m=+1117.359537044" watchObservedRunningTime="2026-01-22 14:02:17.963442402 +0000 UTC m=+1117.374552331" Jan 22 14:02:17 crc kubenswrapper[4769]: I0122 14:02:17.975998 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-84b966f6c9-86ktd" podStartSLOduration=2.975976912 podStartE2EDuration="2.975976912s" podCreationTimestamp="2026-01-22 14:02:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:02:17.971073849 +0000 UTC m=+1117.382183798" watchObservedRunningTime="2026-01-22 14:02:17.975976912 +0000 UTC m=+1117.387086831" Jan 22 14:02:18 crc kubenswrapper[4769]: I0122 14:02:18.460051 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5d6bcd56b9-2hx4m"] Jan 22 14:02:18 crc kubenswrapper[4769]: I0122 14:02:18.940896 4769 generic.go:334] "Generic (PLEG): container finished" podID="a7f766e1-262c-4861-a117-2454631e284f" containerID="fe625d5ef022f97b15014934b8ace95f1c730255ffa2604dde5ccc072b731811" exitCode=0 Jan 22 14:02:18 crc kubenswrapper[4769]: I0122 14:02:18.941098 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-zzjpd" event={"ID":"a7f766e1-262c-4861-a117-2454631e284f","Type":"ContainerDied","Data":"fe625d5ef022f97b15014934b8ace95f1c730255ffa2604dde5ccc072b731811"} Jan 22 14:02:18 crc kubenswrapper[4769]: I0122 14:02:18.944036 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5d6bcd56b9-2hx4m" event={"ID":"a582ad75-7aa2-4ee6-9631-6726b7db9650","Type":"ContainerStarted","Data":"5a0f367e33b6d3fac05f5d699bddf82b4168cc01b56962481ed708c42f0fa01e"} Jan 22 14:02:18 crc kubenswrapper[4769]: I0122 14:02:18.944074 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5d6bcd56b9-2hx4m" event={"ID":"a582ad75-7aa2-4ee6-9631-6726b7db9650","Type":"ContainerStarted","Data":"3b057196b0db48832fa9a6e783c46500568af399275ddd4bc07b7490dfe7e4d5"} Jan 22 14:02:20 crc 
Jan 22 14:02:20 crc kubenswrapper[4769]: I0122 14:02:20.460807 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6464b9bcc6-tjgjv" podUID="aa581bf8-802c-4c64-80fe-83a1baf50a6e" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused"
Jan 22 14:02:20 crc kubenswrapper[4769]: I0122 14:02:20.582198 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7cc4c8d8bd-69kmb" podUID="9a6a04bb-fa49-41f8-b75b-9c27873f8a1f" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused"
Jan 22 14:02:20 crc kubenswrapper[4769]: I0122 14:02:20.962942 4769 generic.go:334] "Generic (PLEG): container finished" podID="3eb8819f-512d-43d8-a59e-1ba8e7e1fb06" containerID="5e70825bce9fda82996c69d7184b5c0089e4b77074cca5f87821576c29bc3590" exitCode=0
Jan 22 14:02:20 crc kubenswrapper[4769]: I0122 14:02:20.962986 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-l4hnw" event={"ID":"3eb8819f-512d-43d8-a59e-1ba8e7e1fb06","Type":"ContainerDied","Data":"5e70825bce9fda82996c69d7184b5c0089e4b77074cca5f87821576c29bc3590"}
Jan 22 14:02:23 crc kubenswrapper[4769]: I0122 14:02:23.561665 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-zzjpd"
Jan 22 14:02:23 crc kubenswrapper[4769]: I0122 14:02:23.702614 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbsw7\" (UniqueName: \"kubernetes.io/projected/a7f766e1-262c-4861-a117-2454631e284f-kube-api-access-pbsw7\") pod \"a7f766e1-262c-4861-a117-2454631e284f\" (UID: \"a7f766e1-262c-4861-a117-2454631e284f\") "
Jan 22 14:02:23 crc kubenswrapper[4769]: I0122 14:02:23.702784 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a7f766e1-262c-4861-a117-2454631e284f-db-sync-config-data\") pod \"a7f766e1-262c-4861-a117-2454631e284f\" (UID: \"a7f766e1-262c-4861-a117-2454631e284f\") "
Jan 22 14:02:23 crc kubenswrapper[4769]: I0122 14:02:23.702971 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7f766e1-262c-4861-a117-2454631e284f-combined-ca-bundle\") pod \"a7f766e1-262c-4861-a117-2454631e284f\" (UID: \"a7f766e1-262c-4861-a117-2454631e284f\") "
Jan 22 14:02:23 crc kubenswrapper[4769]: I0122 14:02:23.709568 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7f766e1-262c-4861-a117-2454631e284f-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "a7f766e1-262c-4861-a117-2454631e284f" (UID: "a7f766e1-262c-4861-a117-2454631e284f"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
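The UnmountVolume/TearDown records above are the inverse of the mount flow: once barbican-db-sync-zzjpd's container has died and the pod is gone from the desired state, the reconciler sees volumes that are still mounted but no longer wanted and tears each one down until it can report "Volume detached". A generic desired-versus-actual diff loop in that spirit (an assumption-level illustration, not the kubelet's reconciler code; the cpuset-style values are made up):

package main

import "fmt"

func main() {
	// Pod a7f766e1-262c-4861-a117-2454631e284f was deleted, so the desired
	// state contains none of its volumes.
	desired := map[string]bool{}
	actual := map[string]bool{
		"kube-api-access-pbsw7": true,
		"db-sync-config-data":   true,
		"combined-ca-bundle":    true,
	}
	for vol := range actual {
		if !desired[vol] {
			fmt.Printf("operationExecutor.UnmountVolume started for volume %q\n", vol)
			delete(actual, vol) // TearDown succeeded -> "Volume detached"
		}
	}
}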
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:02:23 crc kubenswrapper[4769]: I0122 14:02:23.739241 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7f766e1-262c-4861-a117-2454631e284f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a7f766e1-262c-4861-a117-2454631e284f" (UID: "a7f766e1-262c-4861-a117-2454631e284f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:23 crc kubenswrapper[4769]: I0122 14:02:23.804870 4769 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a7f766e1-262c-4861-a117-2454631e284f-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:23 crc kubenswrapper[4769]: I0122 14:02:23.804909 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7f766e1-262c-4861-a117-2454631e284f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:23 crc kubenswrapper[4769]: I0122 14:02:23.804920 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pbsw7\" (UniqueName: \"kubernetes.io/projected/a7f766e1-262c-4861-a117-2454631e284f-kube-api-access-pbsw7\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.011271 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-zzjpd" event={"ID":"a7f766e1-262c-4861-a117-2454631e284f","Type":"ContainerDied","Data":"d9766e548e18d10e2948ccf9973b496ef374cc1f1a4772a78ff7fa96b507f7e2"} Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.011580 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9766e548e18d10e2948ccf9973b496ef374cc1f1a4772a78ff7fa96b507f7e2" Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.011650 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-zzjpd" Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.032234 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-l4hnw" event={"ID":"3eb8819f-512d-43d8-a59e-1ba8e7e1fb06","Type":"ContainerDied","Data":"81f9fccf6c7c0251061ae1067ee4088dd1acc6cd4f8ca50a99ec0953acadb3c6"} Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.032279 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81f9fccf6c7c0251061ae1067ee4088dd1acc6cd4f8ca50a99ec0953acadb3c6" Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.041742 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-l4hnw" Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.109421 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3eb8819f-512d-43d8-a59e-1ba8e7e1fb06-db-sync-config-data\") pod \"3eb8819f-512d-43d8-a59e-1ba8e7e1fb06\" (UID: \"3eb8819f-512d-43d8-a59e-1ba8e7e1fb06\") " Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.109469 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3eb8819f-512d-43d8-a59e-1ba8e7e1fb06-combined-ca-bundle\") pod \"3eb8819f-512d-43d8-a59e-1ba8e7e1fb06\" (UID: \"3eb8819f-512d-43d8-a59e-1ba8e7e1fb06\") " Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.109513 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrgpx\" (UniqueName: \"kubernetes.io/projected/3eb8819f-512d-43d8-a59e-1ba8e7e1fb06-kube-api-access-hrgpx\") pod \"3eb8819f-512d-43d8-a59e-1ba8e7e1fb06\" (UID: \"3eb8819f-512d-43d8-a59e-1ba8e7e1fb06\") " Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.109598 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3eb8819f-512d-43d8-a59e-1ba8e7e1fb06-etc-machine-id\") pod \"3eb8819f-512d-43d8-a59e-1ba8e7e1fb06\" (UID: \"3eb8819f-512d-43d8-a59e-1ba8e7e1fb06\") " Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.109718 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3eb8819f-512d-43d8-a59e-1ba8e7e1fb06-scripts\") pod \"3eb8819f-512d-43d8-a59e-1ba8e7e1fb06\" (UID: \"3eb8819f-512d-43d8-a59e-1ba8e7e1fb06\") " Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.109759 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3eb8819f-512d-43d8-a59e-1ba8e7e1fb06-config-data\") pod \"3eb8819f-512d-43d8-a59e-1ba8e7e1fb06\" (UID: \"3eb8819f-512d-43d8-a59e-1ba8e7e1fb06\") " Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.109978 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3eb8819f-512d-43d8-a59e-1ba8e7e1fb06-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "3eb8819f-512d-43d8-a59e-1ba8e7e1fb06" (UID: "3eb8819f-512d-43d8-a59e-1ba8e7e1fb06"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.110297 4769 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3eb8819f-512d-43d8-a59e-1ba8e7e1fb06-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.113180 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3eb8819f-512d-43d8-a59e-1ba8e7e1fb06-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "3eb8819f-512d-43d8-a59e-1ba8e7e1fb06" (UID: "3eb8819f-512d-43d8-a59e-1ba8e7e1fb06"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.113437 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3eb8819f-512d-43d8-a59e-1ba8e7e1fb06-scripts" (OuterVolumeSpecName: "scripts") pod "3eb8819f-512d-43d8-a59e-1ba8e7e1fb06" (UID: "3eb8819f-512d-43d8-a59e-1ba8e7e1fb06"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.121212 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3eb8819f-512d-43d8-a59e-1ba8e7e1fb06-kube-api-access-hrgpx" (OuterVolumeSpecName: "kube-api-access-hrgpx") pod "3eb8819f-512d-43d8-a59e-1ba8e7e1fb06" (UID: "3eb8819f-512d-43d8-a59e-1ba8e7e1fb06"). InnerVolumeSpecName "kube-api-access-hrgpx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.184895 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3eb8819f-512d-43d8-a59e-1ba8e7e1fb06-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3eb8819f-512d-43d8-a59e-1ba8e7e1fb06" (UID: "3eb8819f-512d-43d8-a59e-1ba8e7e1fb06"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.201981 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3eb8819f-512d-43d8-a59e-1ba8e7e1fb06-config-data" (OuterVolumeSpecName: "config-data") pod "3eb8819f-512d-43d8-a59e-1ba8e7e1fb06" (UID: "3eb8819f-512d-43d8-a59e-1ba8e7e1fb06"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.212294 4769 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3eb8819f-512d-43d8-a59e-1ba8e7e1fb06-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.212335 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3eb8819f-512d-43d8-a59e-1ba8e7e1fb06-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.212351 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrgpx\" (UniqueName: \"kubernetes.io/projected/3eb8819f-512d-43d8-a59e-1ba8e7e1fb06-kube-api-access-hrgpx\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.212366 4769 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3eb8819f-512d-43d8-a59e-1ba8e7e1fb06-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.212377 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3eb8819f-512d-43d8-a59e-1ba8e7e1fb06-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:24 crc kubenswrapper[4769]: E0122 14:02:24.282147 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="7464458e-c450-4b87-80d6-30abeb62e9d2" Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 
Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.830672 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-79fdf5695-77th5"]
Jan 22 14:02:24 crc kubenswrapper[4769]: E0122 14:02:24.831481 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7f766e1-262c-4861-a117-2454631e284f" containerName="barbican-db-sync"
Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.831502 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7f766e1-262c-4861-a117-2454631e284f" containerName="barbican-db-sync"
Jan 22 14:02:24 crc kubenswrapper[4769]: E0122 14:02:24.831552 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3eb8819f-512d-43d8-a59e-1ba8e7e1fb06" containerName="cinder-db-sync"
Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.831561 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eb8819f-512d-43d8-a59e-1ba8e7e1fb06" containerName="cinder-db-sync"
Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.831785 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="3eb8819f-512d-43d8-a59e-1ba8e7e1fb06" containerName="cinder-db-sync"
Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.831841 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7f766e1-262c-4861-a117-2454631e284f" containerName="barbican-db-sync"
Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.832980 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-79fdf5695-77th5"
Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.836825 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data"
Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.839079 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-qkkxv"
Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.839301 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.840853 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-79fdf5695-77th5"]
Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.912860 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-fffc955cd-tlfq2"]
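The RemoveStaleState pairs above show the CPU and memory managers dropping per-container resource bookkeeping for the two db-sync pods that just terminated: each "removing container" entry is followed by a "Deleted CPUSet assignment" from the in-memory state store. A sketch of that kind of eviction, keyed by podUID and container name (illustrative only, not the kubelet's state_mem code; the cpuset strings are invented):

package main

import "fmt"

type key struct{ podUID, container string }

func main() {
	// Assignments left behind by containers whose pods have gone away;
	// the "0-3" cpuset values are hypothetical.
	assignments := map[key]string{
		{podUID: "a7f766e1-262c-4861-a117-2454631e284f", container: "barbican-db-sync"}: "0-3",
		{podUID: "3eb8819f-512d-43d8-a59e-1ba8e7e1fb06", container: "cinder-db-sync"}:   "0-3",
	}
	active := map[string]bool{} // neither pod is in the active set any longer
	for k := range assignments {
		if !active[k.podUID] {
			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n", k.podUID, k.container)
			delete(assignments, k) // the "Deleted CPUSet assignment" step
		}
	}
}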
Need to start a new one" pod="openstack/barbican-keystone-listener-fffc955cd-tlfq2" Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.924507 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.926113 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d271baa-4d4e-42f2-87ec-a0c8a7314560-config-data\") pod \"barbican-worker-79fdf5695-77th5\" (UID: \"2d271baa-4d4e-42f2-87ec-a0c8a7314560\") " pod="openstack/barbican-worker-79fdf5695-77th5" Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.926164 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d271baa-4d4e-42f2-87ec-a0c8a7314560-logs\") pod \"barbican-worker-79fdf5695-77th5\" (UID: \"2d271baa-4d4e-42f2-87ec-a0c8a7314560\") " pod="openstack/barbican-worker-79fdf5695-77th5" Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.926221 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvv7f\" (UniqueName: \"kubernetes.io/projected/2d271baa-4d4e-42f2-87ec-a0c8a7314560-kube-api-access-pvv7f\") pod \"barbican-worker-79fdf5695-77th5\" (UID: \"2d271baa-4d4e-42f2-87ec-a0c8a7314560\") " pod="openstack/barbican-worker-79fdf5695-77th5" Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.926406 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d271baa-4d4e-42f2-87ec-a0c8a7314560-combined-ca-bundle\") pod \"barbican-worker-79fdf5695-77th5\" (UID: \"2d271baa-4d4e-42f2-87ec-a0c8a7314560\") " pod="openstack/barbican-worker-79fdf5695-77th5" Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.926511 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2d271baa-4d4e-42f2-87ec-a0c8a7314560-config-data-custom\") pod \"barbican-worker-79fdf5695-77th5\" (UID: \"2d271baa-4d4e-42f2-87ec-a0c8a7314560\") " pod="openstack/barbican-worker-79fdf5695-77th5" Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.942339 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-fffc955cd-tlfq2"] Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.966354 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-86ktd"] Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.966608 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-84b966f6c9-86ktd" podUID="c490b1f2-d1fa-4db7-8aeb-97c8bb694323" containerName="dnsmasq-dns" containerID="cri-o://dcf65cb3f9e2afa84af423f382b410a4f6ad273e1b71084aa7b89b603bbfc0ab" gracePeriod=10 Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.981967 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-84b966f6c9-86ktd" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.017052 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-6tm8v"] Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.018559 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-6tm8v" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.028976 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ced7731-706e-49ab-8e05-af9f7dc7465a-logs\") pod \"barbican-keystone-listener-fffc955cd-tlfq2\" (UID: \"1ced7731-706e-49ab-8e05-af9f7dc7465a\") " pod="openstack/barbican-keystone-listener-fffc955cd-tlfq2" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.029037 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d271baa-4d4e-42f2-87ec-a0c8a7314560-combined-ca-bundle\") pod \"barbican-worker-79fdf5695-77th5\" (UID: \"2d271baa-4d4e-42f2-87ec-a0c8a7314560\") " pod="openstack/barbican-worker-79fdf5695-77th5" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.029057 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fm7kz\" (UniqueName: \"kubernetes.io/projected/1ced7731-706e-49ab-8e05-af9f7dc7465a-kube-api-access-fm7kz\") pod \"barbican-keystone-listener-fffc955cd-tlfq2\" (UID: \"1ced7731-706e-49ab-8e05-af9f7dc7465a\") " pod="openstack/barbican-keystone-listener-fffc955cd-tlfq2" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.029100 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1ced7731-706e-49ab-8e05-af9f7dc7465a-config-data-custom\") pod \"barbican-keystone-listener-fffc955cd-tlfq2\" (UID: \"1ced7731-706e-49ab-8e05-af9f7dc7465a\") " pod="openstack/barbican-keystone-listener-fffc955cd-tlfq2" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.029129 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2d271baa-4d4e-42f2-87ec-a0c8a7314560-config-data-custom\") pod \"barbican-worker-79fdf5695-77th5\" (UID: \"2d271baa-4d4e-42f2-87ec-a0c8a7314560\") " pod="openstack/barbican-worker-79fdf5695-77th5" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.029259 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ced7731-706e-49ab-8e05-af9f7dc7465a-combined-ca-bundle\") pod \"barbican-keystone-listener-fffc955cd-tlfq2\" (UID: \"1ced7731-706e-49ab-8e05-af9f7dc7465a\") " pod="openstack/barbican-keystone-listener-fffc955cd-tlfq2" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.029292 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d271baa-4d4e-42f2-87ec-a0c8a7314560-config-data\") pod \"barbican-worker-79fdf5695-77th5\" (UID: \"2d271baa-4d4e-42f2-87ec-a0c8a7314560\") " pod="openstack/barbican-worker-79fdf5695-77th5" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.029312 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d271baa-4d4e-42f2-87ec-a0c8a7314560-logs\") pod \"barbican-worker-79fdf5695-77th5\" (UID: \"2d271baa-4d4e-42f2-87ec-a0c8a7314560\") " pod="openstack/barbican-worker-79fdf5695-77th5" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.029334 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvv7f\" 
(UniqueName: \"kubernetes.io/projected/2d271baa-4d4e-42f2-87ec-a0c8a7314560-kube-api-access-pvv7f\") pod \"barbican-worker-79fdf5695-77th5\" (UID: \"2d271baa-4d4e-42f2-87ec-a0c8a7314560\") " pod="openstack/barbican-worker-79fdf5695-77th5" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.029374 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ced7731-706e-49ab-8e05-af9f7dc7465a-config-data\") pod \"barbican-keystone-listener-fffc955cd-tlfq2\" (UID: \"1ced7731-706e-49ab-8e05-af9f7dc7465a\") " pod="openstack/barbican-keystone-listener-fffc955cd-tlfq2" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.038029 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d271baa-4d4e-42f2-87ec-a0c8a7314560-logs\") pod \"barbican-worker-79fdf5695-77th5\" (UID: \"2d271baa-4d4e-42f2-87ec-a0c8a7314560\") " pod="openstack/barbican-worker-79fdf5695-77th5" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.039361 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-6tm8v"] Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.045964 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d271baa-4d4e-42f2-87ec-a0c8a7314560-config-data\") pod \"barbican-worker-79fdf5695-77th5\" (UID: \"2d271baa-4d4e-42f2-87ec-a0c8a7314560\") " pod="openstack/barbican-worker-79fdf5695-77th5" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.052501 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d271baa-4d4e-42f2-87ec-a0c8a7314560-combined-ca-bundle\") pod \"barbican-worker-79fdf5695-77th5\" (UID: \"2d271baa-4d4e-42f2-87ec-a0c8a7314560\") " pod="openstack/barbican-worker-79fdf5695-77th5" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.068595 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2d271baa-4d4e-42f2-87ec-a0c8a7314560-config-data-custom\") pod \"barbican-worker-79fdf5695-77th5\" (UID: \"2d271baa-4d4e-42f2-87ec-a0c8a7314560\") " pod="openstack/barbican-worker-79fdf5695-77th5" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.094767 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvv7f\" (UniqueName: \"kubernetes.io/projected/2d271baa-4d4e-42f2-87ec-a0c8a7314560-kube-api-access-pvv7f\") pod \"barbican-worker-79fdf5695-77th5\" (UID: \"2d271baa-4d4e-42f2-87ec-a0c8a7314560\") " pod="openstack/barbican-worker-79fdf5695-77th5" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.116238 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7464458e-c450-4b87-80d6-30abeb62e9d2","Type":"ContainerStarted","Data":"de8d0b9e577390cb06c5c39aa9aa3dc44fef05360ada1ac35892600534d6f60a"} Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.116356 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7464458e-c450-4b87-80d6-30abeb62e9d2" containerName="ceilometer-notification-agent" containerID="cri-o://0caf44996649384d0bbc9bf8f4235fe301ea6cdb45a76523aeef46f47efee20a" gracePeriod=30 Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.116421 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/ceilometer-0" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.116507 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7464458e-c450-4b87-80d6-30abeb62e9d2" containerName="proxy-httpd" containerID="cri-o://de8d0b9e577390cb06c5c39aa9aa3dc44fef05360ada1ac35892600534d6f60a" gracePeriod=30 Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.116556 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7464458e-c450-4b87-80d6-30abeb62e9d2" containerName="sg-core" containerID="cri-o://bea22c9f83f03abc375d02e9ba136f822fe98bedf79bd391257fedebc9743217" gracePeriod=30 Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.123155 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6bc9c49fb8-n7dm2"] Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.124560 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6bc9c49fb8-n7dm2" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.132699 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.133704 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ced7731-706e-49ab-8e05-af9f7dc7465a-config-data\") pod \"barbican-keystone-listener-fffc955cd-tlfq2\" (UID: \"1ced7731-706e-49ab-8e05-af9f7dc7465a\") " pod="openstack/barbican-keystone-listener-fffc955cd-tlfq2" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.133747 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ced7731-706e-49ab-8e05-af9f7dc7465a-logs\") pod \"barbican-keystone-listener-fffc955cd-tlfq2\" (UID: \"1ced7731-706e-49ab-8e05-af9f7dc7465a\") " pod="openstack/barbican-keystone-listener-fffc955cd-tlfq2" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.133812 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fm7kz\" (UniqueName: \"kubernetes.io/projected/1ced7731-706e-49ab-8e05-af9f7dc7465a-kube-api-access-fm7kz\") pod \"barbican-keystone-listener-fffc955cd-tlfq2\" (UID: \"1ced7731-706e-49ab-8e05-af9f7dc7465a\") " pod="openstack/barbican-keystone-listener-fffc955cd-tlfq2" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.133839 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/626171a3-dca4-4c26-9879-4127f41d2543-dns-svc\") pod \"dnsmasq-dns-75c8ddd69c-6tm8v\" (UID: \"626171a3-dca4-4c26-9879-4127f41d2543\") " pod="openstack/dnsmasq-dns-75c8ddd69c-6tm8v" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.134596 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ced7731-706e-49ab-8e05-af9f7dc7465a-logs\") pod \"barbican-keystone-listener-fffc955cd-tlfq2\" (UID: \"1ced7731-706e-49ab-8e05-af9f7dc7465a\") " pod="openstack/barbican-keystone-listener-fffc955cd-tlfq2" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.137114 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/626171a3-dca4-4c26-9879-4127f41d2543-ovsdbserver-sb\") pod 
\"dnsmasq-dns-75c8ddd69c-6tm8v\" (UID: \"626171a3-dca4-4c26-9879-4127f41d2543\") " pod="openstack/dnsmasq-dns-75c8ddd69c-6tm8v" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.137202 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1ced7731-706e-49ab-8e05-af9f7dc7465a-config-data-custom\") pod \"barbican-keystone-listener-fffc955cd-tlfq2\" (UID: \"1ced7731-706e-49ab-8e05-af9f7dc7465a\") " pod="openstack/barbican-keystone-listener-fffc955cd-tlfq2" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.137773 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ced7731-706e-49ab-8e05-af9f7dc7465a-combined-ca-bundle\") pod \"barbican-keystone-listener-fffc955cd-tlfq2\" (UID: \"1ced7731-706e-49ab-8e05-af9f7dc7465a\") " pod="openstack/barbican-keystone-listener-fffc955cd-tlfq2" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.137845 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/626171a3-dca4-4c26-9879-4127f41d2543-config\") pod \"dnsmasq-dns-75c8ddd69c-6tm8v\" (UID: \"626171a3-dca4-4c26-9879-4127f41d2543\") " pod="openstack/dnsmasq-dns-75c8ddd69c-6tm8v" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.137862 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x477p\" (UniqueName: \"kubernetes.io/projected/626171a3-dca4-4c26-9879-4127f41d2543-kube-api-access-x477p\") pod \"dnsmasq-dns-75c8ddd69c-6tm8v\" (UID: \"626171a3-dca4-4c26-9879-4127f41d2543\") " pod="openstack/dnsmasq-dns-75c8ddd69c-6tm8v" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.137903 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/626171a3-dca4-4c26-9879-4127f41d2543-ovsdbserver-nb\") pod \"dnsmasq-dns-75c8ddd69c-6tm8v\" (UID: \"626171a3-dca4-4c26-9879-4127f41d2543\") " pod="openstack/dnsmasq-dns-75c8ddd69c-6tm8v" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.137934 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/626171a3-dca4-4c26-9879-4127f41d2543-dns-swift-storage-0\") pod \"dnsmasq-dns-75c8ddd69c-6tm8v\" (UID: \"626171a3-dca4-4c26-9879-4127f41d2543\") " pod="openstack/dnsmasq-dns-75c8ddd69c-6tm8v" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.144244 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ced7731-706e-49ab-8e05-af9f7dc7465a-combined-ca-bundle\") pod \"barbican-keystone-listener-fffc955cd-tlfq2\" (UID: \"1ced7731-706e-49ab-8e05-af9f7dc7465a\") " pod="openstack/barbican-keystone-listener-fffc955cd-tlfq2" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.145696 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ced7731-706e-49ab-8e05-af9f7dc7465a-config-data\") pod \"barbican-keystone-listener-fffc955cd-tlfq2\" (UID: \"1ced7731-706e-49ab-8e05-af9f7dc7465a\") " pod="openstack/barbican-keystone-listener-fffc955cd-tlfq2" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.152481 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-79fdf5695-77th5" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.162526 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1ced7731-706e-49ab-8e05-af9f7dc7465a-config-data-custom\") pod \"barbican-keystone-listener-fffc955cd-tlfq2\" (UID: \"1ced7731-706e-49ab-8e05-af9f7dc7465a\") " pod="openstack/barbican-keystone-listener-fffc955cd-tlfq2" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.168289 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-l4hnw" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.169567 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5d6bcd56b9-2hx4m" event={"ID":"a582ad75-7aa2-4ee6-9631-6726b7db9650","Type":"ContainerStarted","Data":"82a874788375fae26a0951e4470e5e91fb777e86404e359d8d7d7bad73728bb6"} Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.169965 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5d6bcd56b9-2hx4m" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.173617 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fm7kz\" (UniqueName: \"kubernetes.io/projected/1ced7731-706e-49ab-8e05-af9f7dc7465a-kube-api-access-fm7kz\") pod \"barbican-keystone-listener-fffc955cd-tlfq2\" (UID: \"1ced7731-706e-49ab-8e05-af9f7dc7465a\") " pod="openstack/barbican-keystone-listener-fffc955cd-tlfq2" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.219664 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6bc9c49fb8-n7dm2"] Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.240033 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/626171a3-dca4-4c26-9879-4127f41d2543-ovsdbserver-nb\") pod \"dnsmasq-dns-75c8ddd69c-6tm8v\" (UID: \"626171a3-dca4-4c26-9879-4127f41d2543\") " pod="openstack/dnsmasq-dns-75c8ddd69c-6tm8v" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.240096 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/626171a3-dca4-4c26-9879-4127f41d2543-dns-swift-storage-0\") pod \"dnsmasq-dns-75c8ddd69c-6tm8v\" (UID: \"626171a3-dca4-4c26-9879-4127f41d2543\") " pod="openstack/dnsmasq-dns-75c8ddd69c-6tm8v" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.240152 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eaad9379-b67a-4b3a-8cc9-f37d9ad425e8-config-data-custom\") pod \"barbican-api-6bc9c49fb8-n7dm2\" (UID: \"eaad9379-b67a-4b3a-8cc9-f37d9ad425e8\") " pod="openstack/barbican-api-6bc9c49fb8-n7dm2" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.240253 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaad9379-b67a-4b3a-8cc9-f37d9ad425e8-combined-ca-bundle\") pod \"barbican-api-6bc9c49fb8-n7dm2\" (UID: \"eaad9379-b67a-4b3a-8cc9-f37d9ad425e8\") " pod="openstack/barbican-api-6bc9c49fb8-n7dm2" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.240313 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/eaad9379-b67a-4b3a-8cc9-f37d9ad425e8-config-data\") pod \"barbican-api-6bc9c49fb8-n7dm2\" (UID: \"eaad9379-b67a-4b3a-8cc9-f37d9ad425e8\") " pod="openstack/barbican-api-6bc9c49fb8-n7dm2" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.240340 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/626171a3-dca4-4c26-9879-4127f41d2543-dns-svc\") pod \"dnsmasq-dns-75c8ddd69c-6tm8v\" (UID: \"626171a3-dca4-4c26-9879-4127f41d2543\") " pod="openstack/dnsmasq-dns-75c8ddd69c-6tm8v" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.240365 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/626171a3-dca4-4c26-9879-4127f41d2543-ovsdbserver-sb\") pod \"dnsmasq-dns-75c8ddd69c-6tm8v\" (UID: \"626171a3-dca4-4c26-9879-4127f41d2543\") " pod="openstack/dnsmasq-dns-75c8ddd69c-6tm8v" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.240399 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-br56m\" (UniqueName: \"kubernetes.io/projected/eaad9379-b67a-4b3a-8cc9-f37d9ad425e8-kube-api-access-br56m\") pod \"barbican-api-6bc9c49fb8-n7dm2\" (UID: \"eaad9379-b67a-4b3a-8cc9-f37d9ad425e8\") " pod="openstack/barbican-api-6bc9c49fb8-n7dm2" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.240424 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eaad9379-b67a-4b3a-8cc9-f37d9ad425e8-logs\") pod \"barbican-api-6bc9c49fb8-n7dm2\" (UID: \"eaad9379-b67a-4b3a-8cc9-f37d9ad425e8\") " pod="openstack/barbican-api-6bc9c49fb8-n7dm2" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.240476 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/626171a3-dca4-4c26-9879-4127f41d2543-config\") pod \"dnsmasq-dns-75c8ddd69c-6tm8v\" (UID: \"626171a3-dca4-4c26-9879-4127f41d2543\") " pod="openstack/dnsmasq-dns-75c8ddd69c-6tm8v" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.240495 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x477p\" (UniqueName: \"kubernetes.io/projected/626171a3-dca4-4c26-9879-4127f41d2543-kube-api-access-x477p\") pod \"dnsmasq-dns-75c8ddd69c-6tm8v\" (UID: \"626171a3-dca4-4c26-9879-4127f41d2543\") " pod="openstack/dnsmasq-dns-75c8ddd69c-6tm8v" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.242765 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/626171a3-dca4-4c26-9879-4127f41d2543-ovsdbserver-sb\") pod \"dnsmasq-dns-75c8ddd69c-6tm8v\" (UID: \"626171a3-dca4-4c26-9879-4127f41d2543\") " pod="openstack/dnsmasq-dns-75c8ddd69c-6tm8v" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.242957 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/626171a3-dca4-4c26-9879-4127f41d2543-ovsdbserver-nb\") pod \"dnsmasq-dns-75c8ddd69c-6tm8v\" (UID: \"626171a3-dca4-4c26-9879-4127f41d2543\") " pod="openstack/dnsmasq-dns-75c8ddd69c-6tm8v" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.243153 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/626171a3-dca4-4c26-9879-4127f41d2543-config\") pod \"dnsmasq-dns-75c8ddd69c-6tm8v\" (UID: \"626171a3-dca4-4c26-9879-4127f41d2543\") " pod="openstack/dnsmasq-dns-75c8ddd69c-6tm8v" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.243719 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/626171a3-dca4-4c26-9879-4127f41d2543-dns-swift-storage-0\") pod \"dnsmasq-dns-75c8ddd69c-6tm8v\" (UID: \"626171a3-dca4-4c26-9879-4127f41d2543\") " pod="openstack/dnsmasq-dns-75c8ddd69c-6tm8v" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.250206 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/626171a3-dca4-4c26-9879-4127f41d2543-dns-svc\") pod \"dnsmasq-dns-75c8ddd69c-6tm8v\" (UID: \"626171a3-dca4-4c26-9879-4127f41d2543\") " pod="openstack/dnsmasq-dns-75c8ddd69c-6tm8v" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.262178 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-fffc955cd-tlfq2" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.273975 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5d6bcd56b9-2hx4m" podStartSLOduration=8.273955787 podStartE2EDuration="8.273955787s" podCreationTimestamp="2026-01-22 14:02:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:02:25.221838953 +0000 UTC m=+1124.632948892" watchObservedRunningTime="2026-01-22 14:02:25.273955787 +0000 UTC m=+1124.685065716" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.317450 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x477p\" (UniqueName: \"kubernetes.io/projected/626171a3-dca4-4c26-9879-4127f41d2543-kube-api-access-x477p\") pod \"dnsmasq-dns-75c8ddd69c-6tm8v\" (UID: \"626171a3-dca4-4c26-9879-4127f41d2543\") " pod="openstack/dnsmasq-dns-75c8ddd69c-6tm8v" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.341727 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eaad9379-b67a-4b3a-8cc9-f37d9ad425e8-config-data-custom\") pod \"barbican-api-6bc9c49fb8-n7dm2\" (UID: \"eaad9379-b67a-4b3a-8cc9-f37d9ad425e8\") " pod="openstack/barbican-api-6bc9c49fb8-n7dm2" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.341812 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaad9379-b67a-4b3a-8cc9-f37d9ad425e8-combined-ca-bundle\") pod \"barbican-api-6bc9c49fb8-n7dm2\" (UID: \"eaad9379-b67a-4b3a-8cc9-f37d9ad425e8\") " pod="openstack/barbican-api-6bc9c49fb8-n7dm2" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.341863 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eaad9379-b67a-4b3a-8cc9-f37d9ad425e8-config-data\") pod \"barbican-api-6bc9c49fb8-n7dm2\" (UID: \"eaad9379-b67a-4b3a-8cc9-f37d9ad425e8\") " pod="openstack/barbican-api-6bc9c49fb8-n7dm2" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.341944 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-br56m\" (UniqueName: 
\"kubernetes.io/projected/eaad9379-b67a-4b3a-8cc9-f37d9ad425e8-kube-api-access-br56m\") pod \"barbican-api-6bc9c49fb8-n7dm2\" (UID: \"eaad9379-b67a-4b3a-8cc9-f37d9ad425e8\") " pod="openstack/barbican-api-6bc9c49fb8-n7dm2" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.341973 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eaad9379-b67a-4b3a-8cc9-f37d9ad425e8-logs\") pod \"barbican-api-6bc9c49fb8-n7dm2\" (UID: \"eaad9379-b67a-4b3a-8cc9-f37d9ad425e8\") " pod="openstack/barbican-api-6bc9c49fb8-n7dm2" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.342555 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eaad9379-b67a-4b3a-8cc9-f37d9ad425e8-logs\") pod \"barbican-api-6bc9c49fb8-n7dm2\" (UID: \"eaad9379-b67a-4b3a-8cc9-f37d9ad425e8\") " pod="openstack/barbican-api-6bc9c49fb8-n7dm2" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.348675 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eaad9379-b67a-4b3a-8cc9-f37d9ad425e8-config-data-custom\") pod \"barbican-api-6bc9c49fb8-n7dm2\" (UID: \"eaad9379-b67a-4b3a-8cc9-f37d9ad425e8\") " pod="openstack/barbican-api-6bc9c49fb8-n7dm2" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.365561 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eaad9379-b67a-4b3a-8cc9-f37d9ad425e8-config-data\") pod \"barbican-api-6bc9c49fb8-n7dm2\" (UID: \"eaad9379-b67a-4b3a-8cc9-f37d9ad425e8\") " pod="openstack/barbican-api-6bc9c49fb8-n7dm2" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.371090 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-br56m\" (UniqueName: \"kubernetes.io/projected/eaad9379-b67a-4b3a-8cc9-f37d9ad425e8-kube-api-access-br56m\") pod \"barbican-api-6bc9c49fb8-n7dm2\" (UID: \"eaad9379-b67a-4b3a-8cc9-f37d9ad425e8\") " pod="openstack/barbican-api-6bc9c49fb8-n7dm2" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.385429 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaad9379-b67a-4b3a-8cc9-f37d9ad425e8-combined-ca-bundle\") pod \"barbican-api-6bc9c49fb8-n7dm2\" (UID: \"eaad9379-b67a-4b3a-8cc9-f37d9ad425e8\") " pod="openstack/barbican-api-6bc9c49fb8-n7dm2" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.403734 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.408105 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.413691 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.413839 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-m6vjl" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.414072 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.414201 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.443962 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-462mx\" (UniqueName: \"kubernetes.io/projected/4383579e-af20-4ae8-89f7-bdaf6480881a-kube-api-access-462mx\") pod \"cinder-scheduler-0\" (UID: \"4383579e-af20-4ae8-89f7-bdaf6480881a\") " pod="openstack/cinder-scheduler-0" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.444013 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4383579e-af20-4ae8-89f7-bdaf6480881a-scripts\") pod \"cinder-scheduler-0\" (UID: \"4383579e-af20-4ae8-89f7-bdaf6480881a\") " pod="openstack/cinder-scheduler-0" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.444037 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4383579e-af20-4ae8-89f7-bdaf6480881a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"4383579e-af20-4ae8-89f7-bdaf6480881a\") " pod="openstack/cinder-scheduler-0" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.444063 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4383579e-af20-4ae8-89f7-bdaf6480881a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"4383579e-af20-4ae8-89f7-bdaf6480881a\") " pod="openstack/cinder-scheduler-0" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.444110 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4383579e-af20-4ae8-89f7-bdaf6480881a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"4383579e-af20-4ae8-89f7-bdaf6480881a\") " pod="openstack/cinder-scheduler-0" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.444163 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4383579e-af20-4ae8-89f7-bdaf6480881a-config-data\") pod \"cinder-scheduler-0\" (UID: \"4383579e-af20-4ae8-89f7-bdaf6480881a\") " pod="openstack/cinder-scheduler-0" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.446180 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.454586 4769 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-84b966f6c9-86ktd" podUID="c490b1f2-d1fa-4db7-8aeb-97c8bb694323" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.155:5353: connect: connection refused" Jan 22 14:02:25 crc 
kubenswrapper[4769]: I0122 14:02:25.548313 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-462mx\" (UniqueName: \"kubernetes.io/projected/4383579e-af20-4ae8-89f7-bdaf6480881a-kube-api-access-462mx\") pod \"cinder-scheduler-0\" (UID: \"4383579e-af20-4ae8-89f7-bdaf6480881a\") " pod="openstack/cinder-scheduler-0" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.548370 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4383579e-af20-4ae8-89f7-bdaf6480881a-scripts\") pod \"cinder-scheduler-0\" (UID: \"4383579e-af20-4ae8-89f7-bdaf6480881a\") " pod="openstack/cinder-scheduler-0" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.548396 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4383579e-af20-4ae8-89f7-bdaf6480881a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"4383579e-af20-4ae8-89f7-bdaf6480881a\") " pod="openstack/cinder-scheduler-0" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.548423 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4383579e-af20-4ae8-89f7-bdaf6480881a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"4383579e-af20-4ae8-89f7-bdaf6480881a\") " pod="openstack/cinder-scheduler-0" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.548459 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4383579e-af20-4ae8-89f7-bdaf6480881a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"4383579e-af20-4ae8-89f7-bdaf6480881a\") " pod="openstack/cinder-scheduler-0" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.548487 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4383579e-af20-4ae8-89f7-bdaf6480881a-config-data\") pod \"cinder-scheduler-0\" (UID: \"4383579e-af20-4ae8-89f7-bdaf6480881a\") " pod="openstack/cinder-scheduler-0" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.558286 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4383579e-af20-4ae8-89f7-bdaf6480881a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"4383579e-af20-4ae8-89f7-bdaf6480881a\") " pod="openstack/cinder-scheduler-0" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.561831 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4383579e-af20-4ae8-89f7-bdaf6480881a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"4383579e-af20-4ae8-89f7-bdaf6480881a\") " pod="openstack/cinder-scheduler-0" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.563178 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-6tm8v"] Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.579743 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-6tm8v" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.591615 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4383579e-af20-4ae8-89f7-bdaf6480881a-scripts\") pod \"cinder-scheduler-0\" (UID: \"4383579e-af20-4ae8-89f7-bdaf6480881a\") " pod="openstack/cinder-scheduler-0" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.598781 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4383579e-af20-4ae8-89f7-bdaf6480881a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"4383579e-af20-4ae8-89f7-bdaf6480881a\") " pod="openstack/cinder-scheduler-0" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.599657 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4383579e-af20-4ae8-89f7-bdaf6480881a-config-data\") pod \"cinder-scheduler-0\" (UID: \"4383579e-af20-4ae8-89f7-bdaf6480881a\") " pod="openstack/cinder-scheduler-0" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.631361 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6bc9c49fb8-n7dm2" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.641065 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-462mx\" (UniqueName: \"kubernetes.io/projected/4383579e-af20-4ae8-89f7-bdaf6480881a-kube-api-access-462mx\") pod \"cinder-scheduler-0\" (UID: \"4383579e-af20-4ae8-89f7-bdaf6480881a\") " pod="openstack/cinder-scheduler-0" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.672054 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-gjxrr"] Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.716274 4769 util.go:30] "No sandbox for pod can be found. 
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.760496 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-gjxrr"]
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.760557 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4-config\") pod \"dnsmasq-dns-5784cf869f-gjxrr\" (UID: \"e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4\") " pod="openstack/dnsmasq-dns-5784cf869f-gjxrr"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.760593 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-gjxrr\" (UID: \"e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4\") " pod="openstack/dnsmasq-dns-5784cf869f-gjxrr"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.760626 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lw4nr\" (UniqueName: \"kubernetes.io/projected/e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4-kube-api-access-lw4nr\") pod \"dnsmasq-dns-5784cf869f-gjxrr\" (UID: \"e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4\") " pod="openstack/dnsmasq-dns-5784cf869f-gjxrr"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.760674 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4-dns-svc\") pod \"dnsmasq-dns-5784cf869f-gjxrr\" (UID: \"e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4\") " pod="openstack/dnsmasq-dns-5784cf869f-gjxrr"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.760705 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-gjxrr\" (UID: \"e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4\") " pod="openstack/dnsmasq-dns-5784cf869f-gjxrr"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.760786 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-gjxrr\" (UID: \"e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4\") " pod="openstack/dnsmasq-dns-5784cf869f-gjxrr"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.763020 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.789291 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.792464 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.799164 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.832335 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.864303 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4-dns-svc\") pod \"dnsmasq-dns-5784cf869f-gjxrr\" (UID: \"e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4\") " pod="openstack/dnsmasq-dns-5784cf869f-gjxrr"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.864366 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-gjxrr\" (UID: \"e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4\") " pod="openstack/dnsmasq-dns-5784cf869f-gjxrr"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.864394 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e5e24dd8-a4f7-4190-a34a-e1d3e92589e5-config-data-custom\") pod \"cinder-api-0\" (UID: \"e5e24dd8-a4f7-4190-a34a-e1d3e92589e5\") " pod="openstack/cinder-api-0"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.864457 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnwg6\" (UniqueName: \"kubernetes.io/projected/e5e24dd8-a4f7-4190-a34a-e1d3e92589e5-kube-api-access-xnwg6\") pod \"cinder-api-0\" (UID: \"e5e24dd8-a4f7-4190-a34a-e1d3e92589e5\") " pod="openstack/cinder-api-0"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.864492 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5e24dd8-a4f7-4190-a34a-e1d3e92589e5-logs\") pod \"cinder-api-0\" (UID: \"e5e24dd8-a4f7-4190-a34a-e1d3e92589e5\") " pod="openstack/cinder-api-0"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.864518 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5e24dd8-a4f7-4190-a34a-e1d3e92589e5-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e5e24dd8-a4f7-4190-a34a-e1d3e92589e5\") " pod="openstack/cinder-api-0"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.864562 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-gjxrr\" (UID: \"e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4\") " pod="openstack/dnsmasq-dns-5784cf869f-gjxrr"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.864621 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5e24dd8-a4f7-4190-a34a-e1d3e92589e5-scripts\") pod \"cinder-api-0\" (UID: \"e5e24dd8-a4f7-4190-a34a-e1d3e92589e5\") " pod="openstack/cinder-api-0"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.864643 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e5e24dd8-a4f7-4190-a34a-e1d3e92589e5-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e5e24dd8-a4f7-4190-a34a-e1d3e92589e5\") " pod="openstack/cinder-api-0"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.864682 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4-config\") pod \"dnsmasq-dns-5784cf869f-gjxrr\" (UID: \"e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4\") " pod="openstack/dnsmasq-dns-5784cf869f-gjxrr"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.864707 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-gjxrr\" (UID: \"e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4\") " pod="openstack/dnsmasq-dns-5784cf869f-gjxrr"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.864740 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lw4nr\" (UniqueName: \"kubernetes.io/projected/e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4-kube-api-access-lw4nr\") pod \"dnsmasq-dns-5784cf869f-gjxrr\" (UID: \"e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4\") " pod="openstack/dnsmasq-dns-5784cf869f-gjxrr"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.864783 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5e24dd8-a4f7-4190-a34a-e1d3e92589e5-config-data\") pod \"cinder-api-0\" (UID: \"e5e24dd8-a4f7-4190-a34a-e1d3e92589e5\") " pod="openstack/cinder-api-0"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.865754 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-gjxrr\" (UID: \"e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4\") " pod="openstack/dnsmasq-dns-5784cf869f-gjxrr"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.865929 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4-dns-svc\") pod \"dnsmasq-dns-5784cf869f-gjxrr\" (UID: \"e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4\") " pod="openstack/dnsmasq-dns-5784cf869f-gjxrr"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.866402 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4-config\") pod \"dnsmasq-dns-5784cf869f-gjxrr\" (UID: \"e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4\") " pod="openstack/dnsmasq-dns-5784cf869f-gjxrr"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.866840 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-gjxrr\" (UID: \"e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4\") " pod="openstack/dnsmasq-dns-5784cf869f-gjxrr"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.872495 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-gjxrr\" (UID: \"e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4\") " pod="openstack/dnsmasq-dns-5784cf869f-gjxrr"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.889863 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lw4nr\" (UniqueName: \"kubernetes.io/projected/e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4-kube-api-access-lw4nr\") pod \"dnsmasq-dns-5784cf869f-gjxrr\" (UID: \"e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4\") " pod="openstack/dnsmasq-dns-5784cf869f-gjxrr"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.966068 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnwg6\" (UniqueName: \"kubernetes.io/projected/e5e24dd8-a4f7-4190-a34a-e1d3e92589e5-kube-api-access-xnwg6\") pod \"cinder-api-0\" (UID: \"e5e24dd8-a4f7-4190-a34a-e1d3e92589e5\") " pod="openstack/cinder-api-0"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.966144 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5e24dd8-a4f7-4190-a34a-e1d3e92589e5-logs\") pod \"cinder-api-0\" (UID: \"e5e24dd8-a4f7-4190-a34a-e1d3e92589e5\") " pod="openstack/cinder-api-0"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.966173 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5e24dd8-a4f7-4190-a34a-e1d3e92589e5-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e5e24dd8-a4f7-4190-a34a-e1d3e92589e5\") " pod="openstack/cinder-api-0"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.966293 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5e24dd8-a4f7-4190-a34a-e1d3e92589e5-scripts\") pod \"cinder-api-0\" (UID: \"e5e24dd8-a4f7-4190-a34a-e1d3e92589e5\") " pod="openstack/cinder-api-0"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.966322 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e5e24dd8-a4f7-4190-a34a-e1d3e92589e5-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e5e24dd8-a4f7-4190-a34a-e1d3e92589e5\") " pod="openstack/cinder-api-0"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.966408 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5e24dd8-a4f7-4190-a34a-e1d3e92589e5-config-data\") pod \"cinder-api-0\" (UID: \"e5e24dd8-a4f7-4190-a34a-e1d3e92589e5\") " pod="openstack/cinder-api-0"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.966459 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e5e24dd8-a4f7-4190-a34a-e1d3e92589e5-config-data-custom\") pod \"cinder-api-0\" (UID: \"e5e24dd8-a4f7-4190-a34a-e1d3e92589e5\") " pod="openstack/cinder-api-0"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.966699 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e5e24dd8-a4f7-4190-a34a-e1d3e92589e5-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e5e24dd8-a4f7-4190-a34a-e1d3e92589e5\") " pod="openstack/cinder-api-0"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.966922 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5e24dd8-a4f7-4190-a34a-e1d3e92589e5-logs\") pod \"cinder-api-0\" (UID: \"e5e24dd8-a4f7-4190-a34a-e1d3e92589e5\") " pod="openstack/cinder-api-0"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.971393 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5e24dd8-a4f7-4190-a34a-e1d3e92589e5-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e5e24dd8-a4f7-4190-a34a-e1d3e92589e5\") " pod="openstack/cinder-api-0"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.975054 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5e24dd8-a4f7-4190-a34a-e1d3e92589e5-scripts\") pod \"cinder-api-0\" (UID: \"e5e24dd8-a4f7-4190-a34a-e1d3e92589e5\") " pod="openstack/cinder-api-0"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.976704 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e5e24dd8-a4f7-4190-a34a-e1d3e92589e5-config-data-custom\") pod \"cinder-api-0\" (UID: \"e5e24dd8-a4f7-4190-a34a-e1d3e92589e5\") " pod="openstack/cinder-api-0"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.978126 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5e24dd8-a4f7-4190-a34a-e1d3e92589e5-config-data\") pod \"cinder-api-0\" (UID: \"e5e24dd8-a4f7-4190-a34a-e1d3e92589e5\") " pod="openstack/cinder-api-0"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.984503 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnwg6\" (UniqueName: \"kubernetes.io/projected/e5e24dd8-a4f7-4190-a34a-e1d3e92589e5-kube-api-access-xnwg6\") pod \"cinder-api-0\" (UID: \"e5e24dd8-a4f7-4190-a34a-e1d3e92589e5\") " pod="openstack/cinder-api-0"
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.100503 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-gjxrr"
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.145197 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-fffc955cd-tlfq2"]
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.171478 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-86ktd"
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.183014 4769 generic.go:334] "Generic (PLEG): container finished" podID="c490b1f2-d1fa-4db7-8aeb-97c8bb694323" containerID="dcf65cb3f9e2afa84af423f382b410a4f6ad273e1b71084aa7b89b603bbfc0ab" exitCode=0
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.183070 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-86ktd"
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.183089 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-86ktd" event={"ID":"c490b1f2-d1fa-4db7-8aeb-97c8bb694323","Type":"ContainerDied","Data":"dcf65cb3f9e2afa84af423f382b410a4f6ad273e1b71084aa7b89b603bbfc0ab"}
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.183120 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-86ktd" event={"ID":"c490b1f2-d1fa-4db7-8aeb-97c8bb694323","Type":"ContainerDied","Data":"793616f841995ae0490e98118d3493c2f1448e1097fa4b42bba1bfcb0fff0710"}
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.183138 4769 scope.go:117] "RemoveContainer" containerID="dcf65cb3f9e2afa84af423f382b410a4f6ad273e1b71084aa7b89b603bbfc0ab"
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.184483 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-fffc955cd-tlfq2" event={"ID":"1ced7731-706e-49ab-8e05-af9f7dc7465a","Type":"ContainerStarted","Data":"c15dda8bdf2b7e8286d94f00e80ce04f6039691eef8d0e2a5c3246fe9de51dc2"}
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.187519 4769 generic.go:334] "Generic (PLEG): container finished" podID="7464458e-c450-4b87-80d6-30abeb62e9d2" containerID="de8d0b9e577390cb06c5c39aa9aa3dc44fef05360ada1ac35892600534d6f60a" exitCode=0
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.187599 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7464458e-c450-4b87-80d6-30abeb62e9d2","Type":"ContainerDied","Data":"de8d0b9e577390cb06c5c39aa9aa3dc44fef05360ada1ac35892600534d6f60a"}
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.187632 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7464458e-c450-4b87-80d6-30abeb62e9d2","Type":"ContainerDied","Data":"bea22c9f83f03abc375d02e9ba136f822fe98bedf79bd391257fedebc9743217"}
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.187608 4769 generic.go:334] "Generic (PLEG): container finished" podID="7464458e-c450-4b87-80d6-30abeb62e9d2" containerID="bea22c9f83f03abc375d02e9ba136f822fe98bedf79bd391257fedebc9743217" exitCode=2
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.187659 4769 generic.go:334] "Generic (PLEG): container finished" podID="7464458e-c450-4b87-80d6-30abeb62e9d2" containerID="0caf44996649384d0bbc9bf8f4235fe301ea6cdb45a76523aeef46f47efee20a" exitCode=0
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.187757 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7464458e-c450-4b87-80d6-30abeb62e9d2","Type":"ContainerDied","Data":"0caf44996649384d0bbc9bf8f4235fe301ea6cdb45a76523aeef46f47efee20a"}
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.220584 4769 scope.go:117] "RemoveContainer" containerID="20368f0045746ae0eecdaf41771b04b1db51dc750b5f58a1ea919250b07080f1"
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.239942 4769 scope.go:117] "RemoveContainer" containerID="dcf65cb3f9e2afa84af423f382b410a4f6ad273e1b71084aa7b89b603bbfc0ab"
Jan 22 14:02:26 crc kubenswrapper[4769]: E0122 14:02:26.240372 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dcf65cb3f9e2afa84af423f382b410a4f6ad273e1b71084aa7b89b603bbfc0ab\": container with ID starting with dcf65cb3f9e2afa84af423f382b410a4f6ad273e1b71084aa7b89b603bbfc0ab not found: ID does not exist" containerID="dcf65cb3f9e2afa84af423f382b410a4f6ad273e1b71084aa7b89b603bbfc0ab"
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.240403 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dcf65cb3f9e2afa84af423f382b410a4f6ad273e1b71084aa7b89b603bbfc0ab"} err="failed to get container status \"dcf65cb3f9e2afa84af423f382b410a4f6ad273e1b71084aa7b89b603bbfc0ab\": rpc error: code = NotFound desc = could not find container \"dcf65cb3f9e2afa84af423f382b410a4f6ad273e1b71084aa7b89b603bbfc0ab\": container with ID starting with dcf65cb3f9e2afa84af423f382b410a4f6ad273e1b71084aa7b89b603bbfc0ab not found: ID does not exist"
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.240424 4769 scope.go:117] "RemoveContainer" containerID="20368f0045746ae0eecdaf41771b04b1db51dc750b5f58a1ea919250b07080f1"
Jan 22 14:02:26 crc kubenswrapper[4769]: E0122 14:02:26.240778 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20368f0045746ae0eecdaf41771b04b1db51dc750b5f58a1ea919250b07080f1\": container with ID starting with 20368f0045746ae0eecdaf41771b04b1db51dc750b5f58a1ea919250b07080f1 not found: ID does not exist" containerID="20368f0045746ae0eecdaf41771b04b1db51dc750b5f58a1ea919250b07080f1"
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.241157 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20368f0045746ae0eecdaf41771b04b1db51dc750b5f58a1ea919250b07080f1"} err="failed to get container status \"20368f0045746ae0eecdaf41771b04b1db51dc750b5f58a1ea919250b07080f1\": rpc error: code = NotFound desc = could not find container \"20368f0045746ae0eecdaf41771b04b1db51dc750b5f58a1ea919250b07080f1\": container with ID starting with 20368f0045746ae0eecdaf41771b04b1db51dc750b5f58a1ea919250b07080f1 not found: ID does not exist"
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.262198 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.277056 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zhpj\" (UniqueName: \"kubernetes.io/projected/c490b1f2-d1fa-4db7-8aeb-97c8bb694323-kube-api-access-7zhpj\") pod \"c490b1f2-d1fa-4db7-8aeb-97c8bb694323\" (UID: \"c490b1f2-d1fa-4db7-8aeb-97c8bb694323\") "
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.277115 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c490b1f2-d1fa-4db7-8aeb-97c8bb694323-ovsdbserver-nb\") pod \"c490b1f2-d1fa-4db7-8aeb-97c8bb694323\" (UID: \"c490b1f2-d1fa-4db7-8aeb-97c8bb694323\") "
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.277164 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c490b1f2-d1fa-4db7-8aeb-97c8bb694323-dns-svc\") pod \"c490b1f2-d1fa-4db7-8aeb-97c8bb694323\" (UID: \"c490b1f2-d1fa-4db7-8aeb-97c8bb694323\") "
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.277209 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c490b1f2-d1fa-4db7-8aeb-97c8bb694323-ovsdbserver-sb\") pod \"c490b1f2-d1fa-4db7-8aeb-97c8bb694323\" (UID: \"c490b1f2-d1fa-4db7-8aeb-97c8bb694323\") "
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.277261 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c490b1f2-d1fa-4db7-8aeb-97c8bb694323-dns-swift-storage-0\") pod \"c490b1f2-d1fa-4db7-8aeb-97c8bb694323\" (UID: \"c490b1f2-d1fa-4db7-8aeb-97c8bb694323\") "
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.277347 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c490b1f2-d1fa-4db7-8aeb-97c8bb694323-config\") pod \"c490b1f2-d1fa-4db7-8aeb-97c8bb694323\" (UID: \"c490b1f2-d1fa-4db7-8aeb-97c8bb694323\") "
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.284379 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c490b1f2-d1fa-4db7-8aeb-97c8bb694323-kube-api-access-7zhpj" (OuterVolumeSpecName: "kube-api-access-7zhpj") pod "c490b1f2-d1fa-4db7-8aeb-97c8bb694323" (UID: "c490b1f2-d1fa-4db7-8aeb-97c8bb694323"). InnerVolumeSpecName "kube-api-access-7zhpj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.297593 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-79fdf5695-77th5"]
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.306615 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-6tm8v"]
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.379169 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7zhpj\" (UniqueName: \"kubernetes.io/projected/c490b1f2-d1fa-4db7-8aeb-97c8bb694323-kube-api-access-7zhpj\") on node \"crc\" DevicePath \"\""
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.383974 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c490b1f2-d1fa-4db7-8aeb-97c8bb694323-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c490b1f2-d1fa-4db7-8aeb-97c8bb694323" (UID: "c490b1f2-d1fa-4db7-8aeb-97c8bb694323"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.394337 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c490b1f2-d1fa-4db7-8aeb-97c8bb694323-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c490b1f2-d1fa-4db7-8aeb-97c8bb694323" (UID: "c490b1f2-d1fa-4db7-8aeb-97c8bb694323"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.398297 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c490b1f2-d1fa-4db7-8aeb-97c8bb694323-config" (OuterVolumeSpecName: "config") pod "c490b1f2-d1fa-4db7-8aeb-97c8bb694323" (UID: "c490b1f2-d1fa-4db7-8aeb-97c8bb694323"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.402436 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c490b1f2-d1fa-4db7-8aeb-97c8bb694323-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c490b1f2-d1fa-4db7-8aeb-97c8bb694323" (UID: "c490b1f2-d1fa-4db7-8aeb-97c8bb694323"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.407804 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c490b1f2-d1fa-4db7-8aeb-97c8bb694323-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c490b1f2-d1fa-4db7-8aeb-97c8bb694323" (UID: "c490b1f2-d1fa-4db7-8aeb-97c8bb694323"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.479192 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.480408 4769 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c490b1f2-d1fa-4db7-8aeb-97c8bb694323-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.480430 4769 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c490b1f2-d1fa-4db7-8aeb-97c8bb694323-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.480441 4769 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c490b1f2-d1fa-4db7-8aeb-97c8bb694323-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.480450 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c490b1f2-d1fa-4db7-8aeb-97c8bb694323-config\") on node \"crc\" DevicePath \"\""
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.480460 4769 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c490b1f2-d1fa-4db7-8aeb-97c8bb694323-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 22 14:02:26 crc kubenswrapper[4769]: W0122 14:02:26.481889 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4383579e_af20_4ae8_89f7_bdaf6480881a.slice/crio-f9d86078c4b4a242efcd83eab3552c5360368cb84cb5844f47a02e8a76d0befc WatchSource:0}: Error finding container f9d86078c4b4a242efcd83eab3552c5360368cb84cb5844f47a02e8a76d0befc: Status 404 returned error can't find the container with id f9d86078c4b4a242efcd83eab3552c5360368cb84cb5844f47a02e8a76d0befc
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.596886 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6bc9c49fb8-n7dm2"]
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.726864 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-gjxrr"]
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.879771 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.915178 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.944498 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-86ktd"]
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.953691 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-86ktd"]
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.988891 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7464458e-c450-4b87-80d6-30abeb62e9d2-scripts\") pod \"7464458e-c450-4b87-80d6-30abeb62e9d2\" (UID: \"7464458e-c450-4b87-80d6-30abeb62e9d2\") "
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.989213 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7464458e-c450-4b87-80d6-30abeb62e9d2-sg-core-conf-yaml\") pod \"7464458e-c450-4b87-80d6-30abeb62e9d2\" (UID: \"7464458e-c450-4b87-80d6-30abeb62e9d2\") "
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.989649 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bnkhr\" (UniqueName: \"kubernetes.io/projected/7464458e-c450-4b87-80d6-30abeb62e9d2-kube-api-access-bnkhr\") pod \"7464458e-c450-4b87-80d6-30abeb62e9d2\" (UID: \"7464458e-c450-4b87-80d6-30abeb62e9d2\") "
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.989817 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7464458e-c450-4b87-80d6-30abeb62e9d2-config-data\") pod \"7464458e-c450-4b87-80d6-30abeb62e9d2\" (UID: \"7464458e-c450-4b87-80d6-30abeb62e9d2\") "
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.990015 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7464458e-c450-4b87-80d6-30abeb62e9d2-log-httpd\") pod \"7464458e-c450-4b87-80d6-30abeb62e9d2\" (UID: \"7464458e-c450-4b87-80d6-30abeb62e9d2\") "
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.990128 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7464458e-c450-4b87-80d6-30abeb62e9d2-run-httpd\") pod \"7464458e-c450-4b87-80d6-30abeb62e9d2\" (UID: \"7464458e-c450-4b87-80d6-30abeb62e9d2\") "
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.990238 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7464458e-c450-4b87-80d6-30abeb62e9d2-combined-ca-bundle\") pod \"7464458e-c450-4b87-80d6-30abeb62e9d2\" (UID: \"7464458e-c450-4b87-80d6-30abeb62e9d2\") "
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.991258 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7464458e-c450-4b87-80d6-30abeb62e9d2-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7464458e-c450-4b87-80d6-30abeb62e9d2" (UID: "7464458e-c450-4b87-80d6-30abeb62e9d2"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.991716 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7464458e-c450-4b87-80d6-30abeb62e9d2-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7464458e-c450-4b87-80d6-30abeb62e9d2" (UID: "7464458e-c450-4b87-80d6-30abeb62e9d2"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.995561 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7464458e-c450-4b87-80d6-30abeb62e9d2-scripts" (OuterVolumeSpecName: "scripts") pod "7464458e-c450-4b87-80d6-30abeb62e9d2" (UID: "7464458e-c450-4b87-80d6-30abeb62e9d2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.995556 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7464458e-c450-4b87-80d6-30abeb62e9d2-kube-api-access-bnkhr" (OuterVolumeSpecName: "kube-api-access-bnkhr") pod "7464458e-c450-4b87-80d6-30abeb62e9d2" (UID: "7464458e-c450-4b87-80d6-30abeb62e9d2"). InnerVolumeSpecName "kube-api-access-bnkhr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.045505 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7464458e-c450-4b87-80d6-30abeb62e9d2-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7464458e-c450-4b87-80d6-30abeb62e9d2" (UID: "7464458e-c450-4b87-80d6-30abeb62e9d2"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.092557 4769 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7464458e-c450-4b87-80d6-30abeb62e9d2-scripts\") on node \"crc\" DevicePath \"\""
Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.092590 4769 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7464458e-c450-4b87-80d6-30abeb62e9d2-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.092605 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bnkhr\" (UniqueName: \"kubernetes.io/projected/7464458e-c450-4b87-80d6-30abeb62e9d2-kube-api-access-bnkhr\") on node \"crc\" DevicePath \"\""
Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.092616 4769 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7464458e-c450-4b87-80d6-30abeb62e9d2-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.092627 4769 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7464458e-c450-4b87-80d6-30abeb62e9d2-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.093883 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7464458e-c450-4b87-80d6-30abeb62e9d2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7464458e-c450-4b87-80d6-30abeb62e9d2" (UID: "7464458e-c450-4b87-80d6-30abeb62e9d2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.097151 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7464458e-c450-4b87-80d6-30abeb62e9d2-config-data" (OuterVolumeSpecName: "config-data") pod "7464458e-c450-4b87-80d6-30abeb62e9d2" (UID: "7464458e-c450-4b87-80d6-30abeb62e9d2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.195931 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7464458e-c450-4b87-80d6-30abeb62e9d2-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.195992 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7464458e-c450-4b87-80d6-30abeb62e9d2-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.199262 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e5e24dd8-a4f7-4190-a34a-e1d3e92589e5","Type":"ContainerStarted","Data":"959f5ec3a165a64e510bc22f94aef93dcf00ba618851c77ce98857a8cd8feb32"}
Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.201670 4769 generic.go:334] "Generic (PLEG): container finished" podID="626171a3-dca4-4c26-9879-4127f41d2543" containerID="209229b23f1b1a54f7e75b6d45c01d01fc6ff63ee1dd1e208ead8428de3d7cca" exitCode=0
Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.201744 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-6tm8v" event={"ID":"626171a3-dca4-4c26-9879-4127f41d2543","Type":"ContainerDied","Data":"209229b23f1b1a54f7e75b6d45c01d01fc6ff63ee1dd1e208ead8428de3d7cca"}
Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.201805 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-6tm8v" event={"ID":"626171a3-dca4-4c26-9879-4127f41d2543","Type":"ContainerStarted","Data":"849a951f9f8aa32b267dc7a128a172f08b4ef52390b9e79aa78ce1d223d66cba"}
Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.214892 4769 generic.go:334] "Generic (PLEG): container finished" podID="e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4" containerID="8cddcdbb8911a19c3b16e342ad30ed08a0f42dc1a1d70ee5aaed962fdb512de3" exitCode=0
Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.215003 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-gjxrr" event={"ID":"e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4","Type":"ContainerDied","Data":"8cddcdbb8911a19c3b16e342ad30ed08a0f42dc1a1d70ee5aaed962fdb512de3"}
Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.215275 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-gjxrr" event={"ID":"e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4","Type":"ContainerStarted","Data":"d6c99dc7e96389aa270b082a25059df7fce55051d25083a5534ef853a5abe126"}
Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.224060 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4383579e-af20-4ae8-89f7-bdaf6480881a","Type":"ContainerStarted","Data":"f9d86078c4b4a242efcd83eab3552c5360368cb84cb5844f47a02e8a76d0befc"}
Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.237717 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6bc9c49fb8-n7dm2" event={"ID":"eaad9379-b67a-4b3a-8cc9-f37d9ad425e8","Type":"ContainerStarted","Data":"04c4a8706de1fbb034493ccbd107bf586baaf531c480261c94f054acfee6f908"}
Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.237758 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6bc9c49fb8-n7dm2" event={"ID":"eaad9379-b67a-4b3a-8cc9-f37d9ad425e8","Type":"ContainerStarted","Data":"9af8e79839bd151effc1aa29a1d456de2993b92396c6ddf4772fc15ecf95323b"}
Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.247178 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7464458e-c450-4b87-80d6-30abeb62e9d2","Type":"ContainerDied","Data":"21b21bef7c85b718cfdbb016fe626efbd1ab870c4b734875a383413b1b9ca2cc"}
Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.247242 4769 scope.go:117] "RemoveContainer" containerID="de8d0b9e577390cb06c5c39aa9aa3dc44fef05360ada1ac35892600534d6f60a"
Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.247290 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.267524 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-79fdf5695-77th5" event={"ID":"2d271baa-4d4e-42f2-87ec-a0c8a7314560","Type":"ContainerStarted","Data":"6f0f4cabb7f607a85e05f6796ffa4125f9f0133df87665b8443130a4140d00af"}
Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.344634 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.363817 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.414419 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 22 14:02:27 crc kubenswrapper[4769]: E0122 14:02:27.415333 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7464458e-c450-4b87-80d6-30abeb62e9d2" containerName="sg-core"
Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.415366 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="7464458e-c450-4b87-80d6-30abeb62e9d2" containerName="sg-core"
Jan 22 14:02:27 crc kubenswrapper[4769]: E0122 14:02:27.415401 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c490b1f2-d1fa-4db7-8aeb-97c8bb694323" containerName="dnsmasq-dns"
Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.415409 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="c490b1f2-d1fa-4db7-8aeb-97c8bb694323" containerName="dnsmasq-dns"
Jan 22 14:02:27 crc kubenswrapper[4769]: E0122 14:02:27.415431 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7464458e-c450-4b87-80d6-30abeb62e9d2" containerName="proxy-httpd"
Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.415438 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="7464458e-c450-4b87-80d6-30abeb62e9d2" containerName="proxy-httpd"
Jan 22 14:02:27 crc kubenswrapper[4769]: E0122 14:02:27.415454 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7464458e-c450-4b87-80d6-30abeb62e9d2" containerName="ceilometer-notification-agent"
Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.415460 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="7464458e-c450-4b87-80d6-30abeb62e9d2" containerName="ceilometer-notification-agent"
Jan 22 14:02:27 crc kubenswrapper[4769]: E0122 14:02:27.415480 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c490b1f2-d1fa-4db7-8aeb-97c8bb694323" containerName="init"
Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.415487 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="c490b1f2-d1fa-4db7-8aeb-97c8bb694323" containerName="init"
Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.415974 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="7464458e-c450-4b87-80d6-30abeb62e9d2" containerName="sg-core"
Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.416009 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="7464458e-c450-4b87-80d6-30abeb62e9d2" containerName="proxy-httpd"
Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.416041 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="7464458e-c450-4b87-80d6-30abeb62e9d2" containerName="ceilometer-notification-agent"
Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.416069 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="c490b1f2-d1fa-4db7-8aeb-97c8bb694323" containerName="dnsmasq-dns"
Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.418761 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.421453 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.421816 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.435592 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.604576 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e12c3fd8-b199-4dbb-8022-ea1997362b45-run-httpd\") pod \"ceilometer-0\" (UID: \"e12c3fd8-b199-4dbb-8022-ea1997362b45\") " pod="openstack/ceilometer-0"
Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.605107 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e12c3fd8-b199-4dbb-8022-ea1997362b45-config-data\") pod \"ceilometer-0\" (UID: \"e12c3fd8-b199-4dbb-8022-ea1997362b45\") " pod="openstack/ceilometer-0"
Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.605131 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e12c3fd8-b199-4dbb-8022-ea1997362b45-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e12c3fd8-b199-4dbb-8022-ea1997362b45\") " pod="openstack/ceilometer-0"
Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.605152 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e12c3fd8-b199-4dbb-8022-ea1997362b45-log-httpd\") pod \"ceilometer-0\" (UID: \"e12c3fd8-b199-4dbb-8022-ea1997362b45\") " pod="openstack/ceilometer-0"
Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.605312 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6zn7\" (UniqueName: \"kubernetes.io/projected/e12c3fd8-b199-4dbb-8022-ea1997362b45-kube-api-access-l6zn7\") pod \"ceilometer-0\" (UID: \"e12c3fd8-b199-4dbb-8022-ea1997362b45\") " pod="openstack/ceilometer-0"
Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.605423 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e12c3fd8-b199-4dbb-8022-ea1997362b45-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e12c3fd8-b199-4dbb-8022-ea1997362b45\") " pod="openstack/ceilometer-0"
Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.605517 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e12c3fd8-b199-4dbb-8022-ea1997362b45-scripts\") pod \"ceilometer-0\" (UID: \"e12c3fd8-b199-4dbb-8022-ea1997362b45\") " pod="openstack/ceilometer-0"
Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.707308 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6zn7\" (UniqueName: \"kubernetes.io/projected/e12c3fd8-b199-4dbb-8022-ea1997362b45-kube-api-access-l6zn7\") pod \"ceilometer-0\" (UID: \"e12c3fd8-b199-4dbb-8022-ea1997362b45\") " pod="openstack/ceilometer-0"
Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.707478 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e12c3fd8-b199-4dbb-8022-ea1997362b45-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e12c3fd8-b199-4dbb-8022-ea1997362b45\") " pod="openstack/ceilometer-0"
Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.707549 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e12c3fd8-b199-4dbb-8022-ea1997362b45-scripts\") pod \"ceilometer-0\" (UID: \"e12c3fd8-b199-4dbb-8022-ea1997362b45\") " pod="openstack/ceilometer-0"
Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.707740 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e12c3fd8-b199-4dbb-8022-ea1997362b45-run-httpd\") pod \"ceilometer-0\" (UID: \"e12c3fd8-b199-4dbb-8022-ea1997362b45\") " pod="openstack/ceilometer-0"
Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.707882 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e12c3fd8-b199-4dbb-8022-ea1997362b45-config-data\") pod \"ceilometer-0\" (UID: \"e12c3fd8-b199-4dbb-8022-ea1997362b45\") " pod="openstack/ceilometer-0"
Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.707911 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e12c3fd8-b199-4dbb-8022-ea1997362b45-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e12c3fd8-b199-4dbb-8022-ea1997362b45\") " pod="openstack/ceilometer-0"
Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.707933 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e12c3fd8-b199-4dbb-8022-ea1997362b45-log-httpd\") pod \"ceilometer-0\" (UID: \"e12c3fd8-b199-4dbb-8022-ea1997362b45\") " pod="openstack/ceilometer-0"
Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.708642 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e12c3fd8-b199-4dbb-8022-ea1997362b45-log-httpd\") pod \"ceilometer-0\" (UID: \"e12c3fd8-b199-4dbb-8022-ea1997362b45\") " pod="openstack/ceilometer-0"
Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.708972 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e12c3fd8-b199-4dbb-8022-ea1997362b45-run-httpd\") pod \"ceilometer-0\" (UID: \"e12c3fd8-b199-4dbb-8022-ea1997362b45\") " pod="openstack/ceilometer-0"
Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.715661 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e12c3fd8-b199-4dbb-8022-ea1997362b45-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e12c3fd8-b199-4dbb-8022-ea1997362b45\") " pod="openstack/ceilometer-0"
Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.716008 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e12c3fd8-b199-4dbb-8022-ea1997362b45-scripts\") pod \"ceilometer-0\" (UID: \"e12c3fd8-b199-4dbb-8022-ea1997362b45\") " pod="openstack/ceilometer-0"
Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.721294 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e12c3fd8-b199-4dbb-8022-ea1997362b45-config-data\") pod \"ceilometer-0\" (UID: \"e12c3fd8-b199-4dbb-8022-ea1997362b45\") " pod="openstack/ceilometer-0"
Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.726218 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e12c3fd8-b199-4dbb-8022-ea1997362b45-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e12c3fd8-b199-4dbb-8022-ea1997362b45\") " pod="openstack/ceilometer-0"
Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.735881 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6zn7\" (UniqueName: \"kubernetes.io/projected/e12c3fd8-b199-4dbb-8022-ea1997362b45-kube-api-access-l6zn7\") pod \"ceilometer-0\" (UID: \"e12c3fd8-b199-4dbb-8022-ea1997362b45\") " pod="openstack/ceilometer-0"
Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.898169 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 22 14:02:28 crc kubenswrapper[4769]: I0122 14:02:28.040405 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-6tm8v"
Jan 22 14:02:28 crc kubenswrapper[4769]: I0122 14:02:28.216415 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/626171a3-dca4-4c26-9879-4127f41d2543-dns-svc\") pod \"626171a3-dca4-4c26-9879-4127f41d2543\" (UID: \"626171a3-dca4-4c26-9879-4127f41d2543\") "
Jan 22 14:02:28 crc kubenswrapper[4769]: I0122 14:02:28.216558 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/626171a3-dca4-4c26-9879-4127f41d2543-dns-swift-storage-0\") pod \"626171a3-dca4-4c26-9879-4127f41d2543\" (UID: \"626171a3-dca4-4c26-9879-4127f41d2543\") "
Jan 22 14:02:28 crc kubenswrapper[4769]: I0122 14:02:28.216676 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/626171a3-dca4-4c26-9879-4127f41d2543-ovsdbserver-sb\") pod \"626171a3-dca4-4c26-9879-4127f41d2543\" (UID: \"626171a3-dca4-4c26-9879-4127f41d2543\") "
Jan 22 14:02:28 crc kubenswrapper[4769]: I0122 14:02:28.216700 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/626171a3-dca4-4c26-9879-4127f41d2543-config\") pod \"626171a3-dca4-4c26-9879-4127f41d2543\" (UID: \"626171a3-dca4-4c26-9879-4127f41d2543\") "
Jan 22 14:02:28 crc kubenswrapper[4769]: I0122 14:02:28.216751 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x477p\" (UniqueName: \"kubernetes.io/projected/626171a3-dca4-4c26-9879-4127f41d2543-kube-api-access-x477p\") pod \"626171a3-dca4-4c26-9879-4127f41d2543\" (UID: \"626171a3-dca4-4c26-9879-4127f41d2543\") "
Jan 22 14:02:28 crc kubenswrapper[4769]: I0122 14:02:28.216828 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/626171a3-dca4-4c26-9879-4127f41d2543-ovsdbserver-nb\") pod \"626171a3-dca4-4c26-9879-4127f41d2543\" (UID: \"626171a3-dca4-4c26-9879-4127f41d2543\") "
Jan 22 14:02:28 crc kubenswrapper[4769]: I0122 14:02:28.220710 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/626171a3-dca4-4c26-9879-4127f41d2543-kube-api-access-x477p" (OuterVolumeSpecName: "kube-api-access-x477p") pod "626171a3-dca4-4c26-9879-4127f41d2543" (UID: "626171a3-dca4-4c26-9879-4127f41d2543"). InnerVolumeSpecName "kube-api-access-x477p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 14:02:28 crc kubenswrapper[4769]: I0122 14:02:28.238214 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/626171a3-dca4-4c26-9879-4127f41d2543-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "626171a3-dca4-4c26-9879-4127f41d2543" (UID: "626171a3-dca4-4c26-9879-4127f41d2543"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 14:02:28 crc kubenswrapper[4769]: I0122 14:02:28.241409 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/626171a3-dca4-4c26-9879-4127f41d2543-config" (OuterVolumeSpecName: "config") pod "626171a3-dca4-4c26-9879-4127f41d2543" (UID: "626171a3-dca4-4c26-9879-4127f41d2543"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:02:28 crc kubenswrapper[4769]: I0122 14:02:28.241561 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/626171a3-dca4-4c26-9879-4127f41d2543-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "626171a3-dca4-4c26-9879-4127f41d2543" (UID: "626171a3-dca4-4c26-9879-4127f41d2543"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:02:28 crc kubenswrapper[4769]: I0122 14:02:28.242573 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/626171a3-dca4-4c26-9879-4127f41d2543-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "626171a3-dca4-4c26-9879-4127f41d2543" (UID: "626171a3-dca4-4c26-9879-4127f41d2543"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:02:28 crc kubenswrapper[4769]: I0122 14:02:28.242703 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/626171a3-dca4-4c26-9879-4127f41d2543-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "626171a3-dca4-4c26-9879-4127f41d2543" (UID: "626171a3-dca4-4c26-9879-4127f41d2543"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:02:28 crc kubenswrapper[4769]: I0122 14:02:28.275781 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e5e24dd8-a4f7-4190-a34a-e1d3e92589e5","Type":"ContainerStarted","Data":"7f106bbb2fd7a91e316f4c3bb7dc08232b3017eae43b85947c47afffb53aa3b4"} Jan 22 14:02:28 crc kubenswrapper[4769]: I0122 14:02:28.277502 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-6tm8v" Jan 22 14:02:28 crc kubenswrapper[4769]: I0122 14:02:28.277973 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-6tm8v" event={"ID":"626171a3-dca4-4c26-9879-4127f41d2543","Type":"ContainerDied","Data":"849a951f9f8aa32b267dc7a128a172f08b4ef52390b9e79aa78ce1d223d66cba"} Jan 22 14:02:28 crc kubenswrapper[4769]: I0122 14:02:28.280411 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6bc9c49fb8-n7dm2" event={"ID":"eaad9379-b67a-4b3a-8cc9-f37d9ad425e8","Type":"ContainerStarted","Data":"d6a865911489b9a1028413866f392612dc71ad5cc1fae59e38104d4f68999e20"} Jan 22 14:02:28 crc kubenswrapper[4769]: I0122 14:02:28.281507 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6bc9c49fb8-n7dm2" Jan 22 14:02:28 crc kubenswrapper[4769]: I0122 14:02:28.281580 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6bc9c49fb8-n7dm2" Jan 22 14:02:28 crc kubenswrapper[4769]: I0122 14:02:28.288729 4769 scope.go:117] "RemoveContainer" containerID="bea22c9f83f03abc375d02e9ba136f822fe98bedf79bd391257fedebc9743217" Jan 22 14:02:28 crc kubenswrapper[4769]: I0122 14:02:28.309673 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6bc9c49fb8-n7dm2" podStartSLOduration=3.3096536690000002 podStartE2EDuration="3.309653669s" podCreationTimestamp="2026-01-22 14:02:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:02:28.300156202 +0000 UTC m=+1127.711266141" watchObservedRunningTime="2026-01-22 14:02:28.309653669 +0000 UTC m=+1127.720763598" Jan 22 14:02:28 crc kubenswrapper[4769]: I0122 14:02:28.321182 4769 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/626171a3-dca4-4c26-9879-4127f41d2543-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:28 crc kubenswrapper[4769]: I0122 14:02:28.321216 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/626171a3-dca4-4c26-9879-4127f41d2543-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:28 crc kubenswrapper[4769]: I0122 14:02:28.321228 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x477p\" (UniqueName: \"kubernetes.io/projected/626171a3-dca4-4c26-9879-4127f41d2543-kube-api-access-x477p\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:28 crc kubenswrapper[4769]: I0122 14:02:28.321240 4769 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/626171a3-dca4-4c26-9879-4127f41d2543-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:28 crc kubenswrapper[4769]: I0122 14:02:28.321252 4769 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/626171a3-dca4-4c26-9879-4127f41d2543-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:28 crc kubenswrapper[4769]: I0122 14:02:28.321263 4769 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/626171a3-dca4-4c26-9879-4127f41d2543-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:28 crc kubenswrapper[4769]: I0122 14:02:28.340519 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-75c8ddd69c-6tm8v"] Jan 22 14:02:28 crc kubenswrapper[4769]: I0122 14:02:28.350115 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-6tm8v"] Jan 22 14:02:28 crc kubenswrapper[4769]: I0122 14:02:28.375959 4769 scope.go:117] "RemoveContainer" containerID="0caf44996649384d0bbc9bf8f4235fe301ea6cdb45a76523aeef46f47efee20a" Jan 22 14:02:28 crc kubenswrapper[4769]: I0122 14:02:28.468053 4769 scope.go:117] "RemoveContainer" containerID="209229b23f1b1a54f7e75b6d45c01d01fc6ff63ee1dd1e208ead8428de3d7cca" Jan 22 14:02:29 crc kubenswrapper[4769]: I0122 14:02:29.345336 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="626171a3-dca4-4c26-9879-4127f41d2543" path="/var/lib/kubelet/pods/626171a3-dca4-4c26-9879-4127f41d2543/volumes" Jan 22 14:02:29 crc kubenswrapper[4769]: I0122 14:02:29.356711 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7464458e-c450-4b87-80d6-30abeb62e9d2" path="/var/lib/kubelet/pods/7464458e-c450-4b87-80d6-30abeb62e9d2/volumes" Jan 22 14:02:29 crc kubenswrapper[4769]: I0122 14:02:29.357928 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c490b1f2-d1fa-4db7-8aeb-97c8bb694323" path="/var/lib/kubelet/pods/c490b1f2-d1fa-4db7-8aeb-97c8bb694323/volumes" Jan 22 14:02:29 crc kubenswrapper[4769]: I0122 14:02:29.366711 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 14:02:29 crc kubenswrapper[4769]: I0122 14:02:29.370602 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-gjxrr" event={"ID":"e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4","Type":"ContainerStarted","Data":"fe451f9d4d036e3a9401a1c3a26fc5a0b7d0eb48182d28ec094d84c5d2642db8"} Jan 22 14:02:29 crc kubenswrapper[4769]: I0122 14:02:29.372524 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5784cf869f-gjxrr" Jan 22 14:02:29 crc kubenswrapper[4769]: I0122 14:02:29.380200 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-fffc955cd-tlfq2" event={"ID":"1ced7731-706e-49ab-8e05-af9f7dc7465a","Type":"ContainerStarted","Data":"65eb749f9ee1ea25ed9259f38da2dd786dfee88fb385aca20cfb1072c7036290"} Jan 22 14:02:29 crc kubenswrapper[4769]: W0122 14:02:29.393108 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode12c3fd8_b199_4dbb_8022_ea1997362b45.slice/crio-6847e6f717a917e8f33fe5f7732739ecc0907695151d12527fc0722d9980fff4 WatchSource:0}: Error finding container 6847e6f717a917e8f33fe5f7732739ecc0907695151d12527fc0722d9980fff4: Status 404 returned error can't find the container with id 6847e6f717a917e8f33fe5f7732739ecc0907695151d12527fc0722d9980fff4 Jan 22 14:02:29 crc kubenswrapper[4769]: I0122 14:02:29.396737 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5784cf869f-gjxrr" podStartSLOduration=4.396712315 podStartE2EDuration="4.396712315s" podCreationTimestamp="2026-01-22 14:02:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:02:29.392921411 +0000 UTC m=+1128.804031350" watchObservedRunningTime="2026-01-22 14:02:29.396712315 +0000 UTC m=+1128.807822244" Jan 22 14:02:29 crc kubenswrapper[4769]: I0122 14:02:29.397334 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-worker-79fdf5695-77th5" event={"ID":"2d271baa-4d4e-42f2-87ec-a0c8a7314560","Type":"ContainerStarted","Data":"8f86e4936d00108837d120533e409cfd99d6a44762e0c92786aa925fd1727a56"} Jan 22 14:02:29 crc kubenswrapper[4769]: I0122 14:02:29.471933 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 22 14:02:30 crc kubenswrapper[4769]: I0122 14:02:30.456736 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e12c3fd8-b199-4dbb-8022-ea1997362b45","Type":"ContainerStarted","Data":"3f046e94cf581905bfb412cafcc0aba6ed78f4b25c54f79b4edd2b0575beed31"} Jan 22 14:02:30 crc kubenswrapper[4769]: I0122 14:02:30.457284 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e12c3fd8-b199-4dbb-8022-ea1997362b45","Type":"ContainerStarted","Data":"6847e6f717a917e8f33fe5f7732739ecc0907695151d12527fc0722d9980fff4"} Jan 22 14:02:30 crc kubenswrapper[4769]: I0122 14:02:30.459702 4769 generic.go:334] "Generic (PLEG): container finished" podID="c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1" containerID="75092d5e878bea8006c178193d6c6e4dcc97bd9265416f68b45c587a530c6f17" exitCode=137 Jan 22 14:02:30 crc kubenswrapper[4769]: I0122 14:02:30.459945 4769 generic.go:334] "Generic (PLEG): container finished" podID="c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1" containerID="24eeffe407e1855bc1e9fc29cbf3704d433191018da0d18584697247b2cdeb5c" exitCode=137 Jan 22 14:02:30 crc kubenswrapper[4769]: I0122 14:02:30.460091 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-88b8d5fbf-mdp8d" event={"ID":"c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1","Type":"ContainerDied","Data":"75092d5e878bea8006c178193d6c6e4dcc97bd9265416f68b45c587a530c6f17"} Jan 22 14:02:30 crc kubenswrapper[4769]: I0122 14:02:30.460144 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-88b8d5fbf-mdp8d" event={"ID":"c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1","Type":"ContainerDied","Data":"24eeffe407e1855bc1e9fc29cbf3704d433191018da0d18584697247b2cdeb5c"} Jan 22 14:02:30 crc kubenswrapper[4769]: I0122 14:02:30.473369 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-88b8d5fbf-mdp8d" Jan 22 14:02:30 crc kubenswrapper[4769]: I0122 14:02:30.473613 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-79fdf5695-77th5" event={"ID":"2d271baa-4d4e-42f2-87ec-a0c8a7314560","Type":"ContainerStarted","Data":"9f6f770e0e0c87d16cef983ad2564a7a8925aa20d641e0e0a9d7c39d098160dc"} Jan 22 14:02:30 crc kubenswrapper[4769]: I0122 14:02:30.475728 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e5e24dd8-a4f7-4190-a34a-e1d3e92589e5","Type":"ContainerStarted","Data":"a205fe5461bfbab00c4675fbef39da8e3cdeb4e605ab0a552ade19769edae659"} Jan 22 14:02:30 crc kubenswrapper[4769]: I0122 14:02:30.475940 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="e5e24dd8-a4f7-4190-a34a-e1d3e92589e5" containerName="cinder-api-log" containerID="cri-o://7f106bbb2fd7a91e316f4c3bb7dc08232b3017eae43b85947c47afffb53aa3b4" gracePeriod=30 Jan 22 14:02:30 crc kubenswrapper[4769]: I0122 14:02:30.476250 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 22 14:02:30 crc kubenswrapper[4769]: I0122 14:02:30.476337 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="e5e24dd8-a4f7-4190-a34a-e1d3e92589e5" containerName="cinder-api" containerID="cri-o://a205fe5461bfbab00c4675fbef39da8e3cdeb4e605ab0a552ade19769edae659" gracePeriod=30 Jan 22 14:02:30 crc kubenswrapper[4769]: I0122 14:02:30.481428 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4383579e-af20-4ae8-89f7-bdaf6480881a","Type":"ContainerStarted","Data":"3354e5732aafcb263d3676fad9ee3df3cbabafc6bd7029cbe04efa83053a2c32"} Jan 22 14:02:30 crc kubenswrapper[4769]: I0122 14:02:30.537676 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.537652971 podStartE2EDuration="5.537652971s" podCreationTimestamp="2026-01-22 14:02:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:02:30.527484546 +0000 UTC m=+1129.938594475" watchObservedRunningTime="2026-01-22 14:02:30.537652971 +0000 UTC m=+1129.948762900" Jan 22 14:02:30 crc kubenswrapper[4769]: I0122 14:02:30.551696 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-fffc955cd-tlfq2" event={"ID":"1ced7731-706e-49ab-8e05-af9f7dc7465a","Type":"ContainerStarted","Data":"b2817ce04426bef01585797fb018136cc8619d5bb0b65d15bba8d2eeb6f1154f"} Jan 22 14:02:30 crc kubenswrapper[4769]: I0122 14:02:30.580045 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-fffc955cd-tlfq2" podStartSLOduration=4.375540776 podStartE2EDuration="6.58002744s" podCreationTimestamp="2026-01-22 14:02:24 +0000 UTC" firstStartedPulling="2026-01-22 14:02:26.155972803 +0000 UTC m=+1125.567082732" lastFinishedPulling="2026-01-22 14:02:28.360459477 +0000 UTC m=+1127.771569396" observedRunningTime="2026-01-22 14:02:30.570178063 +0000 UTC m=+1129.981287992" watchObservedRunningTime="2026-01-22 14:02:30.58002744 +0000 UTC m=+1129.991137369" Jan 22 14:02:30 crc kubenswrapper[4769]: I0122 14:02:30.582603 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-79fdf5695-77th5" 
podStartSLOduration=4.54352623 podStartE2EDuration="6.58259646s" podCreationTimestamp="2026-01-22 14:02:24 +0000 UTC" firstStartedPulling="2026-01-22 14:02:26.341701768 +0000 UTC m=+1125.752811697" lastFinishedPulling="2026-01-22 14:02:28.380771978 +0000 UTC m=+1127.791881927" observedRunningTime="2026-01-22 14:02:30.551309792 +0000 UTC m=+1129.962419731" watchObservedRunningTime="2026-01-22 14:02:30.58259646 +0000 UTC m=+1129.993706389" Jan 22 14:02:30 crc kubenswrapper[4769]: I0122 14:02:30.638116 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1-config-data\") pod \"c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1\" (UID: \"c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1\") " Jan 22 14:02:30 crc kubenswrapper[4769]: I0122 14:02:30.638172 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1-logs\") pod \"c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1\" (UID: \"c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1\") " Jan 22 14:02:30 crc kubenswrapper[4769]: I0122 14:02:30.639109 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7lts\" (UniqueName: \"kubernetes.io/projected/c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1-kube-api-access-q7lts\") pod \"c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1\" (UID: \"c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1\") " Jan 22 14:02:30 crc kubenswrapper[4769]: I0122 14:02:30.639179 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1-horizon-secret-key\") pod \"c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1\" (UID: \"c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1\") " Jan 22 14:02:30 crc kubenswrapper[4769]: I0122 14:02:30.639334 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1-scripts\") pod \"c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1\" (UID: \"c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1\") " Jan 22 14:02:30 crc kubenswrapper[4769]: I0122 14:02:30.645334 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1" (UID: "c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:30 crc kubenswrapper[4769]: I0122 14:02:30.646453 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1-logs" (OuterVolumeSpecName: "logs") pod "c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1" (UID: "c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 14:02:30 crc kubenswrapper[4769]: I0122 14:02:30.651106 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1-kube-api-access-q7lts" (OuterVolumeSpecName: "kube-api-access-q7lts") pod "c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1" (UID: "c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1"). InnerVolumeSpecName "kube-api-access-q7lts". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:02:30 crc kubenswrapper[4769]: I0122 14:02:30.667946 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1-config-data" (OuterVolumeSpecName: "config-data") pod "c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1" (UID: "c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:02:30 crc kubenswrapper[4769]: I0122 14:02:30.672619 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1-scripts" (OuterVolumeSpecName: "scripts") pod "c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1" (UID: "c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:02:30 crc kubenswrapper[4769]: I0122 14:02:30.742812 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:30 crc kubenswrapper[4769]: I0122 14:02:30.742859 4769 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1-logs\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:30 crc kubenswrapper[4769]: I0122 14:02:30.742873 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7lts\" (UniqueName: \"kubernetes.io/projected/c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1-kube-api-access-q7lts\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:30 crc kubenswrapper[4769]: I0122 14:02:30.742889 4769 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:30 crc kubenswrapper[4769]: I0122 14:02:30.742906 4769 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.353510 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.453595 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5e24dd8-a4f7-4190-a34a-e1d3e92589e5-combined-ca-bundle\") pod \"e5e24dd8-a4f7-4190-a34a-e1d3e92589e5\" (UID: \"e5e24dd8-a4f7-4190-a34a-e1d3e92589e5\") " Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.453635 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5e24dd8-a4f7-4190-a34a-e1d3e92589e5-scripts\") pod \"e5e24dd8-a4f7-4190-a34a-e1d3e92589e5\" (UID: \"e5e24dd8-a4f7-4190-a34a-e1d3e92589e5\") " Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.453737 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e5e24dd8-a4f7-4190-a34a-e1d3e92589e5-config-data-custom\") pod \"e5e24dd8-a4f7-4190-a34a-e1d3e92589e5\" (UID: \"e5e24dd8-a4f7-4190-a34a-e1d3e92589e5\") " Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.453757 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5e24dd8-a4f7-4190-a34a-e1d3e92589e5-config-data\") pod \"e5e24dd8-a4f7-4190-a34a-e1d3e92589e5\" (UID: \"e5e24dd8-a4f7-4190-a34a-e1d3e92589e5\") " Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.453926 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5e24dd8-a4f7-4190-a34a-e1d3e92589e5-logs\") pod \"e5e24dd8-a4f7-4190-a34a-e1d3e92589e5\" (UID: \"e5e24dd8-a4f7-4190-a34a-e1d3e92589e5\") " Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.453948 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnwg6\" (UniqueName: \"kubernetes.io/projected/e5e24dd8-a4f7-4190-a34a-e1d3e92589e5-kube-api-access-xnwg6\") pod \"e5e24dd8-a4f7-4190-a34a-e1d3e92589e5\" (UID: \"e5e24dd8-a4f7-4190-a34a-e1d3e92589e5\") " Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.453972 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e5e24dd8-a4f7-4190-a34a-e1d3e92589e5-etc-machine-id\") pod \"e5e24dd8-a4f7-4190-a34a-e1d3e92589e5\" (UID: \"e5e24dd8-a4f7-4190-a34a-e1d3e92589e5\") " Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.454320 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e5e24dd8-a4f7-4190-a34a-e1d3e92589e5-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "e5e24dd8-a4f7-4190-a34a-e1d3e92589e5" (UID: "e5e24dd8-a4f7-4190-a34a-e1d3e92589e5"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.455361 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5e24dd8-a4f7-4190-a34a-e1d3e92589e5-logs" (OuterVolumeSpecName: "logs") pod "e5e24dd8-a4f7-4190-a34a-e1d3e92589e5" (UID: "e5e24dd8-a4f7-4190-a34a-e1d3e92589e5"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.461477 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5e24dd8-a4f7-4190-a34a-e1d3e92589e5-scripts" (OuterVolumeSpecName: "scripts") pod "e5e24dd8-a4f7-4190-a34a-e1d3e92589e5" (UID: "e5e24dd8-a4f7-4190-a34a-e1d3e92589e5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.461854 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5e24dd8-a4f7-4190-a34a-e1d3e92589e5-kube-api-access-xnwg6" (OuterVolumeSpecName: "kube-api-access-xnwg6") pod "e5e24dd8-a4f7-4190-a34a-e1d3e92589e5" (UID: "e5e24dd8-a4f7-4190-a34a-e1d3e92589e5"). InnerVolumeSpecName "kube-api-access-xnwg6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.463074 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5e24dd8-a4f7-4190-a34a-e1d3e92589e5-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e5e24dd8-a4f7-4190-a34a-e1d3e92589e5" (UID: "e5e24dd8-a4f7-4190-a34a-e1d3e92589e5"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.496860 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5e24dd8-a4f7-4190-a34a-e1d3e92589e5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e5e24dd8-a4f7-4190-a34a-e1d3e92589e5" (UID: "e5e24dd8-a4f7-4190-a34a-e1d3e92589e5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.533301 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5e24dd8-a4f7-4190-a34a-e1d3e92589e5-config-data" (OuterVolumeSpecName: "config-data") pod "e5e24dd8-a4f7-4190-a34a-e1d3e92589e5" (UID: "e5e24dd8-a4f7-4190-a34a-e1d3e92589e5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.556872 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5e24dd8-a4f7-4190-a34a-e1d3e92589e5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.556900 4769 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5e24dd8-a4f7-4190-a34a-e1d3e92589e5-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.556909 4769 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e5e24dd8-a4f7-4190-a34a-e1d3e92589e5-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.556920 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5e24dd8-a4f7-4190-a34a-e1d3e92589e5-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.556928 4769 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5e24dd8-a4f7-4190-a34a-e1d3e92589e5-logs\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.556936 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xnwg6\" (UniqueName: \"kubernetes.io/projected/e5e24dd8-a4f7-4190-a34a-e1d3e92589e5-kube-api-access-xnwg6\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.556947 4769 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e5e24dd8-a4f7-4190-a34a-e1d3e92589e5-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.586057 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-88b8d5fbf-mdp8d" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.586692 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-88b8d5fbf-mdp8d" event={"ID":"c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1","Type":"ContainerDied","Data":"054e89b41fe504baa24efa6fdc5ef87502ed22b3b42e8052873a0df4c426e7ed"} Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.586724 4769 scope.go:117] "RemoveContainer" containerID="75092d5e878bea8006c178193d6c6e4dcc97bd9265416f68b45c587a530c6f17" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.598849 4769 generic.go:334] "Generic (PLEG): container finished" podID="e5e24dd8-a4f7-4190-a34a-e1d3e92589e5" containerID="a205fe5461bfbab00c4675fbef39da8e3cdeb4e605ab0a552ade19769edae659" exitCode=0 Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.598880 4769 generic.go:334] "Generic (PLEG): container finished" podID="e5e24dd8-a4f7-4190-a34a-e1d3e92589e5" containerID="7f106bbb2fd7a91e316f4c3bb7dc08232b3017eae43b85947c47afffb53aa3b4" exitCode=143 Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.598915 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e5e24dd8-a4f7-4190-a34a-e1d3e92589e5","Type":"ContainerDied","Data":"a205fe5461bfbab00c4675fbef39da8e3cdeb4e605ab0a552ade19769edae659"} Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.598942 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e5e24dd8-a4f7-4190-a34a-e1d3e92589e5","Type":"ContainerDied","Data":"7f106bbb2fd7a91e316f4c3bb7dc08232b3017eae43b85947c47afffb53aa3b4"} Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.598952 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e5e24dd8-a4f7-4190-a34a-e1d3e92589e5","Type":"ContainerDied","Data":"959f5ec3a165a64e510bc22f94aef93dcf00ba618851c77ce98857a8cd8feb32"} Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.599004 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.612586 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4383579e-af20-4ae8-89f7-bdaf6480881a","Type":"ContainerStarted","Data":"b3dca9a61e5a77a6229ecdcd9e48901971abfbd1767813a6cd35dba0f4aaac74"} Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.630652 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e12c3fd8-b199-4dbb-8022-ea1997362b45","Type":"ContainerStarted","Data":"81b00fa0cdcc67e791a9afbc3e7519246869d2324e6cda565a71161bcb2fc223"} Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.645045 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.767595781 podStartE2EDuration="6.645026268s" podCreationTimestamp="2026-01-22 14:02:25 +0000 UTC" firstStartedPulling="2026-01-22 14:02:26.482906717 +0000 UTC m=+1125.894016646" lastFinishedPulling="2026-01-22 14:02:28.360337204 +0000 UTC m=+1127.771447133" observedRunningTime="2026-01-22 14:02:31.640189967 +0000 UTC m=+1131.051299906" watchObservedRunningTime="2026-01-22 14:02:31.645026268 +0000 UTC m=+1131.056136197" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.710184 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-88b8d5fbf-mdp8d"] Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.725154 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-88b8d5fbf-mdp8d"] Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.737575 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.745878 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.763234 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 22 14:02:31 crc kubenswrapper[4769]: E0122 14:02:31.763619 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5e24dd8-a4f7-4190-a34a-e1d3e92589e5" containerName="cinder-api" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.763631 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5e24dd8-a4f7-4190-a34a-e1d3e92589e5" containerName="cinder-api" Jan 22 14:02:31 crc kubenswrapper[4769]: E0122 14:02:31.763654 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1" containerName="horizon" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.763660 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1" containerName="horizon" Jan 22 14:02:31 crc kubenswrapper[4769]: E0122 14:02:31.763673 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5e24dd8-a4f7-4190-a34a-e1d3e92589e5" containerName="cinder-api-log" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.763680 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5e24dd8-a4f7-4190-a34a-e1d3e92589e5" containerName="cinder-api-log" Jan 22 14:02:31 crc kubenswrapper[4769]: E0122 14:02:31.763700 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1" containerName="horizon-log" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.763705 4769 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1" containerName="horizon-log" Jan 22 14:02:31 crc kubenswrapper[4769]: E0122 14:02:31.763717 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="626171a3-dca4-4c26-9879-4127f41d2543" containerName="init" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.763722 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="626171a3-dca4-4c26-9879-4127f41d2543" containerName="init" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.763926 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1" containerName="horizon" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.763952 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1" containerName="horizon-log" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.763967 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5e24dd8-a4f7-4190-a34a-e1d3e92589e5" containerName="cinder-api-log" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.763980 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="626171a3-dca4-4c26-9879-4127f41d2543" containerName="init" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.764001 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5e24dd8-a4f7-4190-a34a-e1d3e92589e5" containerName="cinder-api" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.765028 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.767890 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.769236 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.770271 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.828293 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.862257 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f66670ed-ef72-4a45-be6e-add4b5f52f94-scripts\") pod \"cinder-api-0\" (UID: \"f66670ed-ef72-4a45-be6e-add4b5f52f94\") " pod="openstack/cinder-api-0" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.862705 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggvzq\" (UniqueName: \"kubernetes.io/projected/f66670ed-ef72-4a45-be6e-add4b5f52f94-kube-api-access-ggvzq\") pod \"cinder-api-0\" (UID: \"f66670ed-ef72-4a45-be6e-add4b5f52f94\") " pod="openstack/cinder-api-0" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.862759 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f66670ed-ef72-4a45-be6e-add4b5f52f94-logs\") pod \"cinder-api-0\" (UID: \"f66670ed-ef72-4a45-be6e-add4b5f52f94\") " pod="openstack/cinder-api-0" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.862785 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/f66670ed-ef72-4a45-be6e-add4b5f52f94-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"f66670ed-ef72-4a45-be6e-add4b5f52f94\") " pod="openstack/cinder-api-0" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.862827 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f66670ed-ef72-4a45-be6e-add4b5f52f94-config-data-custom\") pod \"cinder-api-0\" (UID: \"f66670ed-ef72-4a45-be6e-add4b5f52f94\") " pod="openstack/cinder-api-0" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.862861 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f66670ed-ef72-4a45-be6e-add4b5f52f94-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f66670ed-ef72-4a45-be6e-add4b5f52f94\") " pod="openstack/cinder-api-0" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.862893 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f66670ed-ef72-4a45-be6e-add4b5f52f94-public-tls-certs\") pod \"cinder-api-0\" (UID: \"f66670ed-ef72-4a45-be6e-add4b5f52f94\") " pod="openstack/cinder-api-0" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.862910 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f66670ed-ef72-4a45-be6e-add4b5f52f94-config-data\") pod \"cinder-api-0\" (UID: \"f66670ed-ef72-4a45-be6e-add4b5f52f94\") " pod="openstack/cinder-api-0" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.862991 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f66670ed-ef72-4a45-be6e-add4b5f52f94-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f66670ed-ef72-4a45-be6e-add4b5f52f94\") " pod="openstack/cinder-api-0" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.871582 4769 scope.go:117] "RemoveContainer" containerID="24eeffe407e1855bc1e9fc29cbf3704d433191018da0d18584697247b2cdeb5c" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.889660 4769 scope.go:117] "RemoveContainer" containerID="a205fe5461bfbab00c4675fbef39da8e3cdeb4e605ab0a552ade19769edae659" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.921323 4769 scope.go:117] "RemoveContainer" containerID="7f106bbb2fd7a91e316f4c3bb7dc08232b3017eae43b85947c47afffb53aa3b4" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.964510 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f66670ed-ef72-4a45-be6e-add4b5f52f94-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f66670ed-ef72-4a45-be6e-add4b5f52f94\") " pod="openstack/cinder-api-0" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.964565 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f66670ed-ef72-4a45-be6e-add4b5f52f94-public-tls-certs\") pod \"cinder-api-0\" (UID: \"f66670ed-ef72-4a45-be6e-add4b5f52f94\") " pod="openstack/cinder-api-0" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.964593 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f66670ed-ef72-4a45-be6e-add4b5f52f94-config-data\") pod \"cinder-api-0\" (UID: \"f66670ed-ef72-4a45-be6e-add4b5f52f94\") " pod="openstack/cinder-api-0" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.964676 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f66670ed-ef72-4a45-be6e-add4b5f52f94-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f66670ed-ef72-4a45-be6e-add4b5f52f94\") " pod="openstack/cinder-api-0" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.964770 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f66670ed-ef72-4a45-be6e-add4b5f52f94-scripts\") pod \"cinder-api-0\" (UID: \"f66670ed-ef72-4a45-be6e-add4b5f52f94\") " pod="openstack/cinder-api-0" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.964980 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggvzq\" (UniqueName: \"kubernetes.io/projected/f66670ed-ef72-4a45-be6e-add4b5f52f94-kube-api-access-ggvzq\") pod \"cinder-api-0\" (UID: \"f66670ed-ef72-4a45-be6e-add4b5f52f94\") " pod="openstack/cinder-api-0" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.965041 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f66670ed-ef72-4a45-be6e-add4b5f52f94-logs\") pod \"cinder-api-0\" (UID: \"f66670ed-ef72-4a45-be6e-add4b5f52f94\") " pod="openstack/cinder-api-0" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.965066 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f66670ed-ef72-4a45-be6e-add4b5f52f94-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"f66670ed-ef72-4a45-be6e-add4b5f52f94\") " pod="openstack/cinder-api-0" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.965091 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f66670ed-ef72-4a45-be6e-add4b5f52f94-config-data-custom\") pod \"cinder-api-0\" (UID: \"f66670ed-ef72-4a45-be6e-add4b5f52f94\") " pod="openstack/cinder-api-0" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.965689 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f66670ed-ef72-4a45-be6e-add4b5f52f94-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f66670ed-ef72-4a45-be6e-add4b5f52f94\") " pod="openstack/cinder-api-0" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.965976 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f66670ed-ef72-4a45-be6e-add4b5f52f94-logs\") pod \"cinder-api-0\" (UID: \"f66670ed-ef72-4a45-be6e-add4b5f52f94\") " pod="openstack/cinder-api-0" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.971476 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f66670ed-ef72-4a45-be6e-add4b5f52f94-scripts\") pod \"cinder-api-0\" (UID: \"f66670ed-ef72-4a45-be6e-add4b5f52f94\") " pod="openstack/cinder-api-0" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.971846 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/f66670ed-ef72-4a45-be6e-add4b5f52f94-config-data-custom\") pod \"cinder-api-0\" (UID: \"f66670ed-ef72-4a45-be6e-add4b5f52f94\") " pod="openstack/cinder-api-0" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.972280 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f66670ed-ef72-4a45-be6e-add4b5f52f94-public-tls-certs\") pod \"cinder-api-0\" (UID: \"f66670ed-ef72-4a45-be6e-add4b5f52f94\") " pod="openstack/cinder-api-0" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.972339 4769 scope.go:117] "RemoveContainer" containerID="a205fe5461bfbab00c4675fbef39da8e3cdeb4e605ab0a552ade19769edae659" Jan 22 14:02:31 crc kubenswrapper[4769]: E0122 14:02:31.973970 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a205fe5461bfbab00c4675fbef39da8e3cdeb4e605ab0a552ade19769edae659\": container with ID starting with a205fe5461bfbab00c4675fbef39da8e3cdeb4e605ab0a552ade19769edae659 not found: ID does not exist" containerID="a205fe5461bfbab00c4675fbef39da8e3cdeb4e605ab0a552ade19769edae659" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.974004 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a205fe5461bfbab00c4675fbef39da8e3cdeb4e605ab0a552ade19769edae659"} err="failed to get container status \"a205fe5461bfbab00c4675fbef39da8e3cdeb4e605ab0a552ade19769edae659\": rpc error: code = NotFound desc = could not find container \"a205fe5461bfbab00c4675fbef39da8e3cdeb4e605ab0a552ade19769edae659\": container with ID starting with a205fe5461bfbab00c4675fbef39da8e3cdeb4e605ab0a552ade19769edae659 not found: ID does not exist" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.974042 4769 scope.go:117] "RemoveContainer" containerID="7f106bbb2fd7a91e316f4c3bb7dc08232b3017eae43b85947c47afffb53aa3b4" Jan 22 14:02:31 crc kubenswrapper[4769]: E0122 14:02:31.974443 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f106bbb2fd7a91e316f4c3bb7dc08232b3017eae43b85947c47afffb53aa3b4\": container with ID starting with 7f106bbb2fd7a91e316f4c3bb7dc08232b3017eae43b85947c47afffb53aa3b4 not found: ID does not exist" containerID="7f106bbb2fd7a91e316f4c3bb7dc08232b3017eae43b85947c47afffb53aa3b4" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.974460 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f106bbb2fd7a91e316f4c3bb7dc08232b3017eae43b85947c47afffb53aa3b4"} err="failed to get container status \"7f106bbb2fd7a91e316f4c3bb7dc08232b3017eae43b85947c47afffb53aa3b4\": rpc error: code = NotFound desc = could not find container \"7f106bbb2fd7a91e316f4c3bb7dc08232b3017eae43b85947c47afffb53aa3b4\": container with ID starting with 7f106bbb2fd7a91e316f4c3bb7dc08232b3017eae43b85947c47afffb53aa3b4 not found: ID does not exist" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.974483 4769 scope.go:117] "RemoveContainer" containerID="a205fe5461bfbab00c4675fbef39da8e3cdeb4e605ab0a552ade19769edae659" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.974746 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a205fe5461bfbab00c4675fbef39da8e3cdeb4e605ab0a552ade19769edae659"} err="failed to get container status \"a205fe5461bfbab00c4675fbef39da8e3cdeb4e605ab0a552ade19769edae659\": rpc error: code = NotFound desc 
= could not find container \"a205fe5461bfbab00c4675fbef39da8e3cdeb4e605ab0a552ade19769edae659\": container with ID starting with a205fe5461bfbab00c4675fbef39da8e3cdeb4e605ab0a552ade19769edae659 not found: ID does not exist" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.974765 4769 scope.go:117] "RemoveContainer" containerID="7f106bbb2fd7a91e316f4c3bb7dc08232b3017eae43b85947c47afffb53aa3b4" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.975049 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f106bbb2fd7a91e316f4c3bb7dc08232b3017eae43b85947c47afffb53aa3b4"} err="failed to get container status \"7f106bbb2fd7a91e316f4c3bb7dc08232b3017eae43b85947c47afffb53aa3b4\": rpc error: code = NotFound desc = could not find container \"7f106bbb2fd7a91e316f4c3bb7dc08232b3017eae43b85947c47afffb53aa3b4\": container with ID starting with 7f106bbb2fd7a91e316f4c3bb7dc08232b3017eae43b85947c47afffb53aa3b4 not found: ID does not exist" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.983603 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f66670ed-ef72-4a45-be6e-add4b5f52f94-config-data\") pod \"cinder-api-0\" (UID: \"f66670ed-ef72-4a45-be6e-add4b5f52f94\") " pod="openstack/cinder-api-0" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.986114 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f66670ed-ef72-4a45-be6e-add4b5f52f94-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"f66670ed-ef72-4a45-be6e-add4b5f52f94\") " pod="openstack/cinder-api-0" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.987831 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggvzq\" (UniqueName: \"kubernetes.io/projected/f66670ed-ef72-4a45-be6e-add4b5f52f94-kube-api-access-ggvzq\") pod \"cinder-api-0\" (UID: \"f66670ed-ef72-4a45-be6e-add4b5f52f94\") " pod="openstack/cinder-api-0" Jan 22 14:02:32 crc kubenswrapper[4769]: I0122 14:02:31.990536 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f66670ed-ef72-4a45-be6e-add4b5f52f94-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f66670ed-ef72-4a45-be6e-add4b5f52f94\") " pod="openstack/cinder-api-0" Jan 22 14:02:32 crc kubenswrapper[4769]: I0122 14:02:32.097297 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 22 14:02:32 crc kubenswrapper[4769]: I0122 14:02:32.104734 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5765d95c66-48prv"] Jan 22 14:02:32 crc kubenswrapper[4769]: I0122 14:02:32.106219 4769 util.go:30] "No sandbox for pod can be found. 
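---- note: why the NotFound container errors are harmless ----
The "ContainerStatus from runtime service failed ... NotFound" errors and the "DeleteContainer returned error" lines around them show the kubelet retrying removal of containers that are already gone; since the log proceeds without any failure handling, removal is evidently treated as idempotent. The standard gRPC pattern for that tolerance, with removeFn standing in for the CRI RemoveContainer call:

    package main

    import (
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // removeIdempotent treats NotFound as success: the container is gone,
    // which is exactly the desired end state.
    func removeIdempotent(id string, removeFn func(string) error) error {
        if err := removeFn(id); status.Code(err) != codes.NotFound {
            return err // nil on success, non-nil on real failures
        }
        return nil
    }

    func main() {
        gone := func(id string) error {
            return status.Error(codes.NotFound, "could not find container "+id)
        }
        fmt.Println(removeIdempotent("a205fe5461bf...", gone)) // <nil>
    }
---- end note ----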
Need to start a new one" pod="openstack/barbican-api-5765d95c66-48prv" Jan 22 14:02:32 crc kubenswrapper[4769]: I0122 14:02:32.108183 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 22 14:02:32 crc kubenswrapper[4769]: I0122 14:02:32.113958 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 22 14:02:32 crc kubenswrapper[4769]: I0122 14:02:32.138126 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5765d95c66-48prv"] Jan 22 14:02:32 crc kubenswrapper[4769]: I0122 14:02:32.273782 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95a5cf33-efc2-4ca4-93cf-c397436588cb-combined-ca-bundle\") pod \"barbican-api-5765d95c66-48prv\" (UID: \"95a5cf33-efc2-4ca4-93cf-c397436588cb\") " pod="openstack/barbican-api-5765d95c66-48prv" Jan 22 14:02:32 crc kubenswrapper[4769]: I0122 14:02:32.274565 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95a5cf33-efc2-4ca4-93cf-c397436588cb-config-data\") pod \"barbican-api-5765d95c66-48prv\" (UID: \"95a5cf33-efc2-4ca4-93cf-c397436588cb\") " pod="openstack/barbican-api-5765d95c66-48prv" Jan 22 14:02:32 crc kubenswrapper[4769]: I0122 14:02:32.274640 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/95a5cf33-efc2-4ca4-93cf-c397436588cb-internal-tls-certs\") pod \"barbican-api-5765d95c66-48prv\" (UID: \"95a5cf33-efc2-4ca4-93cf-c397436588cb\") " pod="openstack/barbican-api-5765d95c66-48prv" Jan 22 14:02:32 crc kubenswrapper[4769]: I0122 14:02:32.274752 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/95a5cf33-efc2-4ca4-93cf-c397436588cb-config-data-custom\") pod \"barbican-api-5765d95c66-48prv\" (UID: \"95a5cf33-efc2-4ca4-93cf-c397436588cb\") " pod="openstack/barbican-api-5765d95c66-48prv" Jan 22 14:02:32 crc kubenswrapper[4769]: I0122 14:02:32.274871 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95a5cf33-efc2-4ca4-93cf-c397436588cb-logs\") pod \"barbican-api-5765d95c66-48prv\" (UID: \"95a5cf33-efc2-4ca4-93cf-c397436588cb\") " pod="openstack/barbican-api-5765d95c66-48prv" Jan 22 14:02:32 crc kubenswrapper[4769]: I0122 14:02:32.274925 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/95a5cf33-efc2-4ca4-93cf-c397436588cb-public-tls-certs\") pod \"barbican-api-5765d95c66-48prv\" (UID: \"95a5cf33-efc2-4ca4-93cf-c397436588cb\") " pod="openstack/barbican-api-5765d95c66-48prv" Jan 22 14:02:32 crc kubenswrapper[4769]: I0122 14:02:32.274968 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8b64t\" (UniqueName: \"kubernetes.io/projected/95a5cf33-efc2-4ca4-93cf-c397436588cb-kube-api-access-8b64t\") pod \"barbican-api-5765d95c66-48prv\" (UID: \"95a5cf33-efc2-4ca4-93cf-c397436588cb\") " pod="openstack/barbican-api-5765d95c66-48prv" Jan 22 14:02:32 crc kubenswrapper[4769]: I0122 14:02:32.377487 4769 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/95a5cf33-efc2-4ca4-93cf-c397436588cb-config-data-custom\") pod \"barbican-api-5765d95c66-48prv\" (UID: \"95a5cf33-efc2-4ca4-93cf-c397436588cb\") " pod="openstack/barbican-api-5765d95c66-48prv" Jan 22 14:02:32 crc kubenswrapper[4769]: I0122 14:02:32.377627 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95a5cf33-efc2-4ca4-93cf-c397436588cb-logs\") pod \"barbican-api-5765d95c66-48prv\" (UID: \"95a5cf33-efc2-4ca4-93cf-c397436588cb\") " pod="openstack/barbican-api-5765d95c66-48prv" Jan 22 14:02:32 crc kubenswrapper[4769]: I0122 14:02:32.377707 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/95a5cf33-efc2-4ca4-93cf-c397436588cb-public-tls-certs\") pod \"barbican-api-5765d95c66-48prv\" (UID: \"95a5cf33-efc2-4ca4-93cf-c397436588cb\") " pod="openstack/barbican-api-5765d95c66-48prv" Jan 22 14:02:32 crc kubenswrapper[4769]: I0122 14:02:32.377749 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8b64t\" (UniqueName: \"kubernetes.io/projected/95a5cf33-efc2-4ca4-93cf-c397436588cb-kube-api-access-8b64t\") pod \"barbican-api-5765d95c66-48prv\" (UID: \"95a5cf33-efc2-4ca4-93cf-c397436588cb\") " pod="openstack/barbican-api-5765d95c66-48prv" Jan 22 14:02:32 crc kubenswrapper[4769]: I0122 14:02:32.377915 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95a5cf33-efc2-4ca4-93cf-c397436588cb-combined-ca-bundle\") pod \"barbican-api-5765d95c66-48prv\" (UID: \"95a5cf33-efc2-4ca4-93cf-c397436588cb\") " pod="openstack/barbican-api-5765d95c66-48prv" Jan 22 14:02:32 crc kubenswrapper[4769]: I0122 14:02:32.378056 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95a5cf33-efc2-4ca4-93cf-c397436588cb-config-data\") pod \"barbican-api-5765d95c66-48prv\" (UID: \"95a5cf33-efc2-4ca4-93cf-c397436588cb\") " pod="openstack/barbican-api-5765d95c66-48prv" Jan 22 14:02:32 crc kubenswrapper[4769]: I0122 14:02:32.378148 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/95a5cf33-efc2-4ca4-93cf-c397436588cb-internal-tls-certs\") pod \"barbican-api-5765d95c66-48prv\" (UID: \"95a5cf33-efc2-4ca4-93cf-c397436588cb\") " pod="openstack/barbican-api-5765d95c66-48prv" Jan 22 14:02:32 crc kubenswrapper[4769]: I0122 14:02:32.380692 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95a5cf33-efc2-4ca4-93cf-c397436588cb-logs\") pod \"barbican-api-5765d95c66-48prv\" (UID: \"95a5cf33-efc2-4ca4-93cf-c397436588cb\") " pod="openstack/barbican-api-5765d95c66-48prv" Jan 22 14:02:32 crc kubenswrapper[4769]: I0122 14:02:32.393903 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/95a5cf33-efc2-4ca4-93cf-c397436588cb-public-tls-certs\") pod \"barbican-api-5765d95c66-48prv\" (UID: \"95a5cf33-efc2-4ca4-93cf-c397436588cb\") " pod="openstack/barbican-api-5765d95c66-48prv" Jan 22 14:02:32 crc kubenswrapper[4769]: I0122 14:02:32.394338 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/95a5cf33-efc2-4ca4-93cf-c397436588cb-internal-tls-certs\") pod \"barbican-api-5765d95c66-48prv\" (UID: \"95a5cf33-efc2-4ca4-93cf-c397436588cb\") " pod="openstack/barbican-api-5765d95c66-48prv" Jan 22 14:02:32 crc kubenswrapper[4769]: I0122 14:02:32.394836 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95a5cf33-efc2-4ca4-93cf-c397436588cb-combined-ca-bundle\") pod \"barbican-api-5765d95c66-48prv\" (UID: \"95a5cf33-efc2-4ca4-93cf-c397436588cb\") " pod="openstack/barbican-api-5765d95c66-48prv" Jan 22 14:02:32 crc kubenswrapper[4769]: I0122 14:02:32.395713 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/95a5cf33-efc2-4ca4-93cf-c397436588cb-config-data-custom\") pod \"barbican-api-5765d95c66-48prv\" (UID: \"95a5cf33-efc2-4ca4-93cf-c397436588cb\") " pod="openstack/barbican-api-5765d95c66-48prv" Jan 22 14:02:32 crc kubenswrapper[4769]: I0122 14:02:32.401080 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8b64t\" (UniqueName: \"kubernetes.io/projected/95a5cf33-efc2-4ca4-93cf-c397436588cb-kube-api-access-8b64t\") pod \"barbican-api-5765d95c66-48prv\" (UID: \"95a5cf33-efc2-4ca4-93cf-c397436588cb\") " pod="openstack/barbican-api-5765d95c66-48prv" Jan 22 14:02:32 crc kubenswrapper[4769]: I0122 14:02:32.414027 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95a5cf33-efc2-4ca4-93cf-c397436588cb-config-data\") pod \"barbican-api-5765d95c66-48prv\" (UID: \"95a5cf33-efc2-4ca4-93cf-c397436588cb\") " pod="openstack/barbican-api-5765d95c66-48prv" Jan 22 14:02:32 crc kubenswrapper[4769]: I0122 14:02:32.498309 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5765d95c66-48prv" Jan 22 14:02:32 crc kubenswrapper[4769]: I0122 14:02:32.593028 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 22 14:02:32 crc kubenswrapper[4769]: W0122 14:02:32.595845 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf66670ed_ef72_4a45_be6e_add4b5f52f94.slice/crio-7749fde25a07524950cf875b05364639b8258e7c79918d486ae792e9819e28ea WatchSource:0}: Error finding container 7749fde25a07524950cf875b05364639b8258e7c79918d486ae792e9819e28ea: Status 404 returned error can't find the container with id 7749fde25a07524950cf875b05364639b8258e7c79918d486ae792e9819e28ea Jan 22 14:02:32 crc kubenswrapper[4769]: I0122 14:02:32.682843 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e12c3fd8-b199-4dbb-8022-ea1997362b45","Type":"ContainerStarted","Data":"b5ee5434348cf923fba435a2559a5a264053474440f8130af21c2d5bd4b2a22c"} Jan 22 14:02:32 crc kubenswrapper[4769]: I0122 14:02:32.688357 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f66670ed-ef72-4a45-be6e-add4b5f52f94","Type":"ContainerStarted","Data":"7749fde25a07524950cf875b05364639b8258e7c79918d486ae792e9819e28ea"} Jan 22 14:02:32 crc kubenswrapper[4769]: I0122 14:02:32.717655 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-7cc4c8d8bd-69kmb" Jan 22 14:02:32 crc kubenswrapper[4769]: I0122 14:02:32.894926 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1" path="/var/lib/kubelet/pods/c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1/volumes" Jan 22 14:02:32 crc kubenswrapper[4769]: I0122 14:02:32.896002 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5e24dd8-a4f7-4190-a34a-e1d3e92589e5" path="/var/lib/kubelet/pods/e5e24dd8-a4f7-4190-a34a-e1d3e92589e5/volumes" Jan 22 14:02:33 crc kubenswrapper[4769]: I0122 14:02:33.045576 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-6464b9bcc6-tjgjv" Jan 22 14:02:33 crc kubenswrapper[4769]: I0122 14:02:33.054549 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5765d95c66-48prv"] Jan 22 14:02:33 crc kubenswrapper[4769]: I0122 14:02:33.712939 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e12c3fd8-b199-4dbb-8022-ea1997362b45","Type":"ContainerStarted","Data":"0bf74afc1bd09f3d8c6303b0e19d9074d9577290bb273a6f32a45d4dcae632a3"} Jan 22 14:02:33 crc kubenswrapper[4769]: I0122 14:02:33.713344 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 22 14:02:33 crc kubenswrapper[4769]: I0122 14:02:33.727190 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f66670ed-ef72-4a45-be6e-add4b5f52f94","Type":"ContainerStarted","Data":"a39e387f5cb6796bd5245099577a041d4535330335336f500889fa062380c528"} Jan 22 14:02:33 crc kubenswrapper[4769]: I0122 14:02:33.743633 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.007732393 podStartE2EDuration="6.743608251s" podCreationTimestamp="2026-01-22 14:02:27 +0000 UTC" firstStartedPulling="2026-01-22 14:02:29.420452808 +0000 UTC m=+1128.831562727" lastFinishedPulling="2026-01-22 14:02:33.156328656 
+0000 UTC m=+1132.567438585" observedRunningTime="2026-01-22 14:02:33.735533561 +0000 UTC m=+1133.146643500" watchObservedRunningTime="2026-01-22 14:02:33.743608251 +0000 UTC m=+1133.154718180" Jan 22 14:02:33 crc kubenswrapper[4769]: I0122 14:02:33.746404 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5765d95c66-48prv" event={"ID":"95a5cf33-efc2-4ca4-93cf-c397436588cb","Type":"ContainerStarted","Data":"2754cbe24003d84d5d8ab18a809cd82431ec14af97d42ed25eaba73bf5c21e5d"} Jan 22 14:02:33 crc kubenswrapper[4769]: I0122 14:02:33.746465 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5765d95c66-48prv" event={"ID":"95a5cf33-efc2-4ca4-93cf-c397436588cb","Type":"ContainerStarted","Data":"0ab896500ec150c8bf3bca58b8d802dfbbe37af095176eabc94f4c8827641c93"} Jan 22 14:02:33 crc kubenswrapper[4769]: I0122 14:02:33.746479 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5765d95c66-48prv" event={"ID":"95a5cf33-efc2-4ca4-93cf-c397436588cb","Type":"ContainerStarted","Data":"d272230a7a62bb7d6abfae5a7ba1a9c5070f1e0d62a268cba218e9edc3a00fb2"} Jan 22 14:02:33 crc kubenswrapper[4769]: I0122 14:02:33.746865 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5765d95c66-48prv" Jan 22 14:02:33 crc kubenswrapper[4769]: I0122 14:02:33.746890 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5765d95c66-48prv" Jan 22 14:02:33 crc kubenswrapper[4769]: I0122 14:02:33.784515 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5765d95c66-48prv" podStartSLOduration=1.7844932390000001 podStartE2EDuration="1.784493239s" podCreationTimestamp="2026-01-22 14:02:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:02:33.778872087 +0000 UTC m=+1133.189982026" watchObservedRunningTime="2026-01-22 14:02:33.784493239 +0000 UTC m=+1133.195603198" Jan 22 14:02:34 crc kubenswrapper[4769]: I0122 14:02:34.640434 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-7cc4c8d8bd-69kmb" Jan 22 14:02:34 crc kubenswrapper[4769]: I0122 14:02:34.744910 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-6464b9bcc6-tjgjv" Jan 22 14:02:34 crc kubenswrapper[4769]: I0122 14:02:34.747898 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6464b9bcc6-tjgjv"] Jan 22 14:02:34 crc kubenswrapper[4769]: I0122 14:02:34.758078 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f66670ed-ef72-4a45-be6e-add4b5f52f94","Type":"ContainerStarted","Data":"087ace9ba575af2007578e297c79bfb3494a65af68fe7fbcd4c9a7bfe7e38a7a"} Jan 22 14:02:34 crc kubenswrapper[4769]: I0122 14:02:34.758115 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 22 14:02:34 crc kubenswrapper[4769]: I0122 14:02:34.758221 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6464b9bcc6-tjgjv" podUID="aa581bf8-802c-4c64-80fe-83a1baf50a6e" containerName="horizon-log" containerID="cri-o://b1c17d223ae3c6e1952926e3cf792e852ecbb7c481e6bf6d9e1501d916e79b79" gracePeriod=30 Jan 22 14:02:34 crc kubenswrapper[4769]: I0122 14:02:34.758911 4769 kuberuntime_container.go:808] "Killing container with a grace period" 
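
The two pod_startup_latency_tracker lines above make the relationship between the reported durations explicit: podStartSLOduration is podStartE2EDuration minus the time spent pulling images, with the pull time measured on the monotonic clock (the m=+... offsets). A worked check in Go against the ceilometer-0 numbers; this is my reading of the tracker's arithmetic, inferred from the log rather than from kubelet source:

package main

import "fmt"

func main() {
	// Monotonic offsets (m=+..., seconds) from the ceilometer-0 line above.
	const (
		firstStartedPulling = 1128.831562727
		lastFinishedPulling = 1132.567438585
		// observedRunningTime - podCreationTimestamp
		// (14:02:33.743608251 - 14:02:27).
		podStartE2E = 6.743608251
	)
	pull := lastFinishedPulling - firstStartedPulling
	fmt.Printf("image pull took %.9fs\n", pull)                    // 3.735875858s
	fmt.Printf("podStartSLOduration = %.9fs\n", podStartE2E-pull) // 3.007732393s, as logged
}

For barbican-api-5765d95c66-48prv no pull was needed (both pull timestamps are the zero time 0001-01-01), so SLO and E2E durations coincide at 1.784493239s; the trailing ...0000001 in the logged SLO value is ordinary float64 noise.
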
pod="openstack/horizon-6464b9bcc6-tjgjv" podUID="aa581bf8-802c-4c64-80fe-83a1baf50a6e" containerName="horizon" containerID="cri-o://dc2e4c5fd0438679984690345cbc0e4820ff234a30678389437d5d203ba8a03a" gracePeriod=30 Jan 22 14:02:34 crc kubenswrapper[4769]: I0122 14:02:34.811641 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.811616 podStartE2EDuration="3.811616s" podCreationTimestamp="2026-01-22 14:02:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:02:34.799341187 +0000 UTC m=+1134.210451116" watchObservedRunningTime="2026-01-22 14:02:34.811616 +0000 UTC m=+1134.222725929" Jan 22 14:02:35 crc kubenswrapper[4769]: I0122 14:02:35.767004 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 22 14:02:36 crc kubenswrapper[4769]: I0122 14:02:36.014962 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 22 14:02:36 crc kubenswrapper[4769]: I0122 14:02:36.102778 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5784cf869f-gjxrr" Jan 22 14:02:36 crc kubenswrapper[4769]: I0122 14:02:36.188683 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-8bcps"] Jan 22 14:02:36 crc kubenswrapper[4769]: I0122 14:02:36.188947 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8b5c85b87-8bcps" podUID="09f60324-cca8-4988-bf9b-6967d2bfe9f6" containerName="dnsmasq-dns" containerID="cri-o://83d081c8a21e75cf1863029740b353ffa7a1f8816c42743784431ac4247f119a" gracePeriod=10 Jan 22 14:02:36 crc kubenswrapper[4769]: I0122 14:02:36.729130 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-8bcps" Jan 22 14:02:36 crc kubenswrapper[4769]: I0122 14:02:36.776816 4769 generic.go:334] "Generic (PLEG): container finished" podID="09f60324-cca8-4988-bf9b-6967d2bfe9f6" containerID="83d081c8a21e75cf1863029740b353ffa7a1f8816c42743784431ac4247f119a" exitCode=0 Jan 22 14:02:36 crc kubenswrapper[4769]: I0122 14:02:36.776903 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-8bcps" event={"ID":"09f60324-cca8-4988-bf9b-6967d2bfe9f6","Type":"ContainerDied","Data":"83d081c8a21e75cf1863029740b353ffa7a1f8816c42743784431ac4247f119a"} Jan 22 14:02:36 crc kubenswrapper[4769]: I0122 14:02:36.777007 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-8bcps" Jan 22 14:02:36 crc kubenswrapper[4769]: I0122 14:02:36.777042 4769 scope.go:117] "RemoveContainer" containerID="83d081c8a21e75cf1863029740b353ffa7a1f8816c42743784431ac4247f119a" Jan 22 14:02:36 crc kubenswrapper[4769]: I0122 14:02:36.778662 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-8bcps" event={"ID":"09f60324-cca8-4988-bf9b-6967d2bfe9f6","Type":"ContainerDied","Data":"de08ee3bddd1437f1405dc62dcd35ee86837e2196876742c81be83ac8aaa6642"} Jan 22 14:02:36 crc kubenswrapper[4769]: I0122 14:02:36.816541 4769 scope.go:117] "RemoveContainer" containerID="5cdf9c7a0103441af1fab3d20ca2ba561f800dd384d01d55e05efe9b94bef65d" Jan 22 14:02:36 crc kubenswrapper[4769]: I0122 14:02:36.819135 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 22 14:02:36 crc kubenswrapper[4769]: I0122 14:02:36.851135 4769 scope.go:117] "RemoveContainer" containerID="83d081c8a21e75cf1863029740b353ffa7a1f8816c42743784431ac4247f119a" Jan 22 14:02:36 crc kubenswrapper[4769]: E0122 14:02:36.851605 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83d081c8a21e75cf1863029740b353ffa7a1f8816c42743784431ac4247f119a\": container with ID starting with 83d081c8a21e75cf1863029740b353ffa7a1f8816c42743784431ac4247f119a not found: ID does not exist" containerID="83d081c8a21e75cf1863029740b353ffa7a1f8816c42743784431ac4247f119a" Jan 22 14:02:36 crc kubenswrapper[4769]: I0122 14:02:36.851647 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83d081c8a21e75cf1863029740b353ffa7a1f8816c42743784431ac4247f119a"} err="failed to get container status \"83d081c8a21e75cf1863029740b353ffa7a1f8816c42743784431ac4247f119a\": rpc error: code = NotFound desc = could not find container \"83d081c8a21e75cf1863029740b353ffa7a1f8816c42743784431ac4247f119a\": container with ID starting with 83d081c8a21e75cf1863029740b353ffa7a1f8816c42743784431ac4247f119a not found: ID does not exist" Jan 22 14:02:36 crc kubenswrapper[4769]: I0122 14:02:36.851674 4769 scope.go:117] "RemoveContainer" containerID="5cdf9c7a0103441af1fab3d20ca2ba561f800dd384d01d55e05efe9b94bef65d" Jan 22 14:02:36 crc kubenswrapper[4769]: E0122 14:02:36.852013 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5cdf9c7a0103441af1fab3d20ca2ba561f800dd384d01d55e05efe9b94bef65d\": container with ID starting with 5cdf9c7a0103441af1fab3d20ca2ba561f800dd384d01d55e05efe9b94bef65d not found: ID does not exist" containerID="5cdf9c7a0103441af1fab3d20ca2ba561f800dd384d01d55e05efe9b94bef65d" Jan 22 14:02:36 crc kubenswrapper[4769]: I0122 14:02:36.852055 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5cdf9c7a0103441af1fab3d20ca2ba561f800dd384d01d55e05efe9b94bef65d"} err="failed to get container status \"5cdf9c7a0103441af1fab3d20ca2ba561f800dd384d01d55e05efe9b94bef65d\": rpc error: code = NotFound desc = could not find container \"5cdf9c7a0103441af1fab3d20ca2ba561f800dd384d01d55e05efe9b94bef65d\": container with ID starting with 5cdf9c7a0103441af1fab3d20ca2ba561f800dd384d01d55e05efe9b94bef65d not found: ID does not exist" Jan 22 14:02:36 crc kubenswrapper[4769]: I0122 14:02:36.877551 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9lf9\" (UniqueName: 
\"kubernetes.io/projected/09f60324-cca8-4988-bf9b-6967d2bfe9f6-kube-api-access-w9lf9\") pod \"09f60324-cca8-4988-bf9b-6967d2bfe9f6\" (UID: \"09f60324-cca8-4988-bf9b-6967d2bfe9f6\") " Jan 22 14:02:36 crc kubenswrapper[4769]: I0122 14:02:36.877633 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09f60324-cca8-4988-bf9b-6967d2bfe9f6-dns-svc\") pod \"09f60324-cca8-4988-bf9b-6967d2bfe9f6\" (UID: \"09f60324-cca8-4988-bf9b-6967d2bfe9f6\") " Jan 22 14:02:36 crc kubenswrapper[4769]: I0122 14:02:36.877733 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/09f60324-cca8-4988-bf9b-6967d2bfe9f6-ovsdbserver-sb\") pod \"09f60324-cca8-4988-bf9b-6967d2bfe9f6\" (UID: \"09f60324-cca8-4988-bf9b-6967d2bfe9f6\") " Jan 22 14:02:36 crc kubenswrapper[4769]: I0122 14:02:36.878630 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/09f60324-cca8-4988-bf9b-6967d2bfe9f6-dns-swift-storage-0\") pod \"09f60324-cca8-4988-bf9b-6967d2bfe9f6\" (UID: \"09f60324-cca8-4988-bf9b-6967d2bfe9f6\") " Jan 22 14:02:36 crc kubenswrapper[4769]: I0122 14:02:36.878658 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09f60324-cca8-4988-bf9b-6967d2bfe9f6-config\") pod \"09f60324-cca8-4988-bf9b-6967d2bfe9f6\" (UID: \"09f60324-cca8-4988-bf9b-6967d2bfe9f6\") " Jan 22 14:02:36 crc kubenswrapper[4769]: I0122 14:02:36.878786 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/09f60324-cca8-4988-bf9b-6967d2bfe9f6-ovsdbserver-nb\") pod \"09f60324-cca8-4988-bf9b-6967d2bfe9f6\" (UID: \"09f60324-cca8-4988-bf9b-6967d2bfe9f6\") " Jan 22 14:02:36 crc kubenswrapper[4769]: I0122 14:02:36.890618 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09f60324-cca8-4988-bf9b-6967d2bfe9f6-kube-api-access-w9lf9" (OuterVolumeSpecName: "kube-api-access-w9lf9") pod "09f60324-cca8-4988-bf9b-6967d2bfe9f6" (UID: "09f60324-cca8-4988-bf9b-6967d2bfe9f6"). InnerVolumeSpecName "kube-api-access-w9lf9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:02:36 crc kubenswrapper[4769]: I0122 14:02:36.932039 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09f60324-cca8-4988-bf9b-6967d2bfe9f6-config" (OuterVolumeSpecName: "config") pod "09f60324-cca8-4988-bf9b-6967d2bfe9f6" (UID: "09f60324-cca8-4988-bf9b-6967d2bfe9f6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:02:36 crc kubenswrapper[4769]: I0122 14:02:36.935315 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09f60324-cca8-4988-bf9b-6967d2bfe9f6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "09f60324-cca8-4988-bf9b-6967d2bfe9f6" (UID: "09f60324-cca8-4988-bf9b-6967d2bfe9f6"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:02:36 crc kubenswrapper[4769]: I0122 14:02:36.937713 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09f60324-cca8-4988-bf9b-6967d2bfe9f6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "09f60324-cca8-4988-bf9b-6967d2bfe9f6" (UID: "09f60324-cca8-4988-bf9b-6967d2bfe9f6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:02:36 crc kubenswrapper[4769]: I0122 14:02:36.952567 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09f60324-cca8-4988-bf9b-6967d2bfe9f6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "09f60324-cca8-4988-bf9b-6967d2bfe9f6" (UID: "09f60324-cca8-4988-bf9b-6967d2bfe9f6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:02:36 crc kubenswrapper[4769]: I0122 14:02:36.955742 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09f60324-cca8-4988-bf9b-6967d2bfe9f6-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "09f60324-cca8-4988-bf9b-6967d2bfe9f6" (UID: "09f60324-cca8-4988-bf9b-6967d2bfe9f6"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:02:36 crc kubenswrapper[4769]: I0122 14:02:36.981011 4769 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/09f60324-cca8-4988-bf9b-6967d2bfe9f6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:36 crc kubenswrapper[4769]: I0122 14:02:36.981044 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9lf9\" (UniqueName: \"kubernetes.io/projected/09f60324-cca8-4988-bf9b-6967d2bfe9f6-kube-api-access-w9lf9\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:36 crc kubenswrapper[4769]: I0122 14:02:36.981056 4769 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09f60324-cca8-4988-bf9b-6967d2bfe9f6-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:36 crc kubenswrapper[4769]: I0122 14:02:36.981105 4769 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/09f60324-cca8-4988-bf9b-6967d2bfe9f6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:36 crc kubenswrapper[4769]: I0122 14:02:36.981118 4769 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/09f60324-cca8-4988-bf9b-6967d2bfe9f6-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:36 crc kubenswrapper[4769]: I0122 14:02:36.981125 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09f60324-cca8-4988-bf9b-6967d2bfe9f6-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:37 crc kubenswrapper[4769]: I0122 14:02:37.071801 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6bc9c49fb8-n7dm2" Jan 22 14:02:37 crc kubenswrapper[4769]: I0122 14:02:37.134862 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-8bcps"] Jan 22 14:02:37 crc kubenswrapper[4769]: I0122 14:02:37.139675 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-8bcps"] Jan 22 14:02:37 crc kubenswrapper[4769]: I0122 
14:02:37.293320 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6bc9c49fb8-n7dm2" Jan 22 14:02:37 crc kubenswrapper[4769]: I0122 14:02:37.788390 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="4383579e-af20-4ae8-89f7-bdaf6480881a" containerName="cinder-scheduler" containerID="cri-o://3354e5732aafcb263d3676fad9ee3df3cbabafc6bd7029cbe04efa83053a2c32" gracePeriod=30 Jan 22 14:02:37 crc kubenswrapper[4769]: I0122 14:02:37.789017 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="4383579e-af20-4ae8-89f7-bdaf6480881a" containerName="probe" containerID="cri-o://b3dca9a61e5a77a6229ecdcd9e48901971abfbd1767813a6cd35dba0f4aaac74" gracePeriod=30 Jan 22 14:02:38 crc kubenswrapper[4769]: I0122 14:02:38.810905 4769 generic.go:334] "Generic (PLEG): container finished" podID="aa581bf8-802c-4c64-80fe-83a1baf50a6e" containerID="dc2e4c5fd0438679984690345cbc0e4820ff234a30678389437d5d203ba8a03a" exitCode=0 Jan 22 14:02:38 crc kubenswrapper[4769]: I0122 14:02:38.811011 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6464b9bcc6-tjgjv" event={"ID":"aa581bf8-802c-4c64-80fe-83a1baf50a6e","Type":"ContainerDied","Data":"dc2e4c5fd0438679984690345cbc0e4820ff234a30678389437d5d203ba8a03a"} Jan 22 14:02:38 crc kubenswrapper[4769]: I0122 14:02:38.855742 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-d8d684bc6-pmxwh" Jan 22 14:02:38 crc kubenswrapper[4769]: I0122 14:02:38.895837 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09f60324-cca8-4988-bf9b-6967d2bfe9f6" path="/var/lib/kubelet/pods/09f60324-cca8-4988-bf9b-6967d2bfe9f6/volumes" Jan 22 14:02:39 crc kubenswrapper[4769]: I0122 14:02:39.843591 4769 generic.go:334] "Generic (PLEG): container finished" podID="4383579e-af20-4ae8-89f7-bdaf6480881a" containerID="b3dca9a61e5a77a6229ecdcd9e48901971abfbd1767813a6cd35dba0f4aaac74" exitCode=0 Jan 22 14:02:39 crc kubenswrapper[4769]: I0122 14:02:39.843816 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4383579e-af20-4ae8-89f7-bdaf6480881a","Type":"ContainerDied","Data":"b3dca9a61e5a77a6229ecdcd9e48901971abfbd1767813a6cd35dba0f4aaac74"} Jan 22 14:02:40 crc kubenswrapper[4769]: I0122 14:02:40.460784 4769 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6464b9bcc6-tjgjv" podUID="aa581bf8-802c-4c64-80fe-83a1baf50a6e" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Jan 22 14:02:40 crc kubenswrapper[4769]: I0122 14:02:40.481541 4769 patch_prober.go:28] interesting pod/machine-config-daemon-hwhw7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 14:02:40 crc kubenswrapper[4769]: I0122 14:02:40.481607 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 14:02:41 crc 
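
The probe failures above are two different stories. The horizon readiness failure is expected: that pod had just been sent its termination signal (the "Killing container with a grace period" lines at 14:02:34), so "connection refused" simply means the server had already closed its listener. The machine-config-daemon liveness failure has no such context and is the one worth watching. An illustrative HTTP probe in the same spirit; this is an approximation of what prober.go does, not kubelet's actual implementation:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// probe issues a GET like an HTTP readiness/liveness probe and reports
// a failure string similar to the "Probe failed" lines above.
func probe(url string) string {
	client := &http.Client{Timeout: time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return fmt.Sprintf("failure: Get %q: %v", url, err)
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 200 && resp.StatusCode < 400 {
		return fmt.Sprintf("success: status %d", resp.StatusCode)
	}
	return fmt.Sprintf("failure: status %d", resp.StatusCode)
}

func main() {
	// With no listener on the port this prints a "connection refused"
	// failure, the same signature as the horizon line above.
	fmt.Println(probe("http://127.0.0.1:8798/health"))
}
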
kubenswrapper[4769]: I0122 14:02:41.171743 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 22 14:02:41 crc kubenswrapper[4769]: E0122 14:02:41.172413 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09f60324-cca8-4988-bf9b-6967d2bfe9f6" containerName="dnsmasq-dns" Jan 22 14:02:41 crc kubenswrapper[4769]: I0122 14:02:41.172427 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="09f60324-cca8-4988-bf9b-6967d2bfe9f6" containerName="dnsmasq-dns" Jan 22 14:02:41 crc kubenswrapper[4769]: E0122 14:02:41.172449 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09f60324-cca8-4988-bf9b-6967d2bfe9f6" containerName="init" Jan 22 14:02:41 crc kubenswrapper[4769]: I0122 14:02:41.172455 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="09f60324-cca8-4988-bf9b-6967d2bfe9f6" containerName="init" Jan 22 14:02:41 crc kubenswrapper[4769]: I0122 14:02:41.172625 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="09f60324-cca8-4988-bf9b-6967d2bfe9f6" containerName="dnsmasq-dns" Jan 22 14:02:41 crc kubenswrapper[4769]: I0122 14:02:41.173226 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 22 14:02:41 crc kubenswrapper[4769]: I0122 14:02:41.179993 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 22 14:02:41 crc kubenswrapper[4769]: I0122 14:02:41.180105 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-mtjrf" Jan 22 14:02:41 crc kubenswrapper[4769]: I0122 14:02:41.180326 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 22 14:02:41 crc kubenswrapper[4769]: I0122 14:02:41.190463 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 22 14:02:41 crc kubenswrapper[4769]: I0122 14:02:41.286253 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a46459a9-7fab-439c-95fe-5d6cdcb16997-openstack-config-secret\") pod \"openstackclient\" (UID: \"a46459a9-7fab-439c-95fe-5d6cdcb16997\") " pod="openstack/openstackclient" Jan 22 14:02:41 crc kubenswrapper[4769]: I0122 14:02:41.286353 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a46459a9-7fab-439c-95fe-5d6cdcb16997-openstack-config\") pod \"openstackclient\" (UID: \"a46459a9-7fab-439c-95fe-5d6cdcb16997\") " pod="openstack/openstackclient" Jan 22 14:02:41 crc kubenswrapper[4769]: I0122 14:02:41.286409 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a46459a9-7fab-439c-95fe-5d6cdcb16997-combined-ca-bundle\") pod \"openstackclient\" (UID: \"a46459a9-7fab-439c-95fe-5d6cdcb16997\") " pod="openstack/openstackclient" Jan 22 14:02:41 crc kubenswrapper[4769]: I0122 14:02:41.286435 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kg4tn\" (UniqueName: \"kubernetes.io/projected/a46459a9-7fab-439c-95fe-5d6cdcb16997-kube-api-access-kg4tn\") pod \"openstackclient\" (UID: \"a46459a9-7fab-439c-95fe-5d6cdcb16997\") " pod="openstack/openstackclient" Jan 22 14:02:41 crc kubenswrapper[4769]: I0122 
14:02:41.388166 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a46459a9-7fab-439c-95fe-5d6cdcb16997-openstack-config-secret\") pod \"openstackclient\" (UID: \"a46459a9-7fab-439c-95fe-5d6cdcb16997\") " pod="openstack/openstackclient" Jan 22 14:02:41 crc kubenswrapper[4769]: I0122 14:02:41.388260 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a46459a9-7fab-439c-95fe-5d6cdcb16997-openstack-config\") pod \"openstackclient\" (UID: \"a46459a9-7fab-439c-95fe-5d6cdcb16997\") " pod="openstack/openstackclient" Jan 22 14:02:41 crc kubenswrapper[4769]: I0122 14:02:41.388316 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a46459a9-7fab-439c-95fe-5d6cdcb16997-combined-ca-bundle\") pod \"openstackclient\" (UID: \"a46459a9-7fab-439c-95fe-5d6cdcb16997\") " pod="openstack/openstackclient" Jan 22 14:02:41 crc kubenswrapper[4769]: I0122 14:02:41.388341 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kg4tn\" (UniqueName: \"kubernetes.io/projected/a46459a9-7fab-439c-95fe-5d6cdcb16997-kube-api-access-kg4tn\") pod \"openstackclient\" (UID: \"a46459a9-7fab-439c-95fe-5d6cdcb16997\") " pod="openstack/openstackclient" Jan 22 14:02:41 crc kubenswrapper[4769]: I0122 14:02:41.389552 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a46459a9-7fab-439c-95fe-5d6cdcb16997-openstack-config\") pod \"openstackclient\" (UID: \"a46459a9-7fab-439c-95fe-5d6cdcb16997\") " pod="openstack/openstackclient" Jan 22 14:02:41 crc kubenswrapper[4769]: I0122 14:02:41.394488 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a46459a9-7fab-439c-95fe-5d6cdcb16997-combined-ca-bundle\") pod \"openstackclient\" (UID: \"a46459a9-7fab-439c-95fe-5d6cdcb16997\") " pod="openstack/openstackclient" Jan 22 14:02:41 crc kubenswrapper[4769]: I0122 14:02:41.395169 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a46459a9-7fab-439c-95fe-5d6cdcb16997-openstack-config-secret\") pod \"openstackclient\" (UID: \"a46459a9-7fab-439c-95fe-5d6cdcb16997\") " pod="openstack/openstackclient" Jan 22 14:02:41 crc kubenswrapper[4769]: I0122 14:02:41.410397 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kg4tn\" (UniqueName: \"kubernetes.io/projected/a46459a9-7fab-439c-95fe-5d6cdcb16997-kube-api-access-kg4tn\") pod \"openstackclient\" (UID: \"a46459a9-7fab-439c-95fe-5d6cdcb16997\") " pod="openstack/openstackclient" Jan 22 14:02:41 crc kubenswrapper[4769]: I0122 14:02:41.495779 4769 util.go:30] "No sandbox for pod can be found. 
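
Note that the two sandbox messages in this log differ by one word and by call site: util.go:30 logs "No sandbox for pod can be found" when a pod (here openstackclient) starts for the first time, while util.go:48 logs "No ready sandbox ..." when an existing sandbox is no longer ready, as with the dying dnsmasq and cinder-scheduler pods above; that reading is inferred from the messages themselves, not from kubelet source. Counting the latter per pod is a cheap churn signal; a throwaway sketch of my own:

package main

import (
	"fmt"
	"regexp"
	"strings"
)

var podRe = regexp.MustCompile(`pod="([^"]+)"`)

func main() {
	lines := []string{
		`I0122 14:02:41.173226 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"`,
		`I0122 14:02:36.729130 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-8bcps"`,
	}
	recreations := map[string]int{}
	for _, l := range lines {
		// Only the "No ready sandbox" variant implies an existing sandbox died.
		if strings.Contains(l, "No ready sandbox") {
			if m := podRe.FindStringSubmatch(l); m != nil {
				recreations[m[1]]++
			}
		}
	}
	fmt.Println("sandbox recreations per pod:", recreations)
}
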
Need to start a new one" pod="openstack/openstackclient" Jan 22 14:02:42 crc kubenswrapper[4769]: I0122 14:02:42.116236 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 22 14:02:42 crc kubenswrapper[4769]: W0122 14:02:42.120340 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda46459a9_7fab_439c_95fe_5d6cdcb16997.slice/crio-a356f107447d745c0f7b15bc272f1ebc4dde1957fc214b6a088fa2276d888939 WatchSource:0}: Error finding container a356f107447d745c0f7b15bc272f1ebc4dde1957fc214b6a088fa2276d888939: Status 404 returned error can't find the container with id a356f107447d745c0f7b15bc272f1ebc4dde1957fc214b6a088fa2276d888939 Jan 22 14:02:42 crc kubenswrapper[4769]: I0122 14:02:42.321675 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 22 14:02:42 crc kubenswrapper[4769]: I0122 14:02:42.423357 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4383579e-af20-4ae8-89f7-bdaf6480881a-etc-machine-id\") pod \"4383579e-af20-4ae8-89f7-bdaf6480881a\" (UID: \"4383579e-af20-4ae8-89f7-bdaf6480881a\") " Jan 22 14:02:42 crc kubenswrapper[4769]: I0122 14:02:42.423419 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4383579e-af20-4ae8-89f7-bdaf6480881a-combined-ca-bundle\") pod \"4383579e-af20-4ae8-89f7-bdaf6480881a\" (UID: \"4383579e-af20-4ae8-89f7-bdaf6480881a\") " Jan 22 14:02:42 crc kubenswrapper[4769]: I0122 14:02:42.423486 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4383579e-af20-4ae8-89f7-bdaf6480881a-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "4383579e-af20-4ae8-89f7-bdaf6480881a" (UID: "4383579e-af20-4ae8-89f7-bdaf6480881a"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 14:02:42 crc kubenswrapper[4769]: I0122 14:02:42.423547 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4383579e-af20-4ae8-89f7-bdaf6480881a-config-data\") pod \"4383579e-af20-4ae8-89f7-bdaf6480881a\" (UID: \"4383579e-af20-4ae8-89f7-bdaf6480881a\") " Jan 22 14:02:42 crc kubenswrapper[4769]: I0122 14:02:42.423571 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4383579e-af20-4ae8-89f7-bdaf6480881a-scripts\") pod \"4383579e-af20-4ae8-89f7-bdaf6480881a\" (UID: \"4383579e-af20-4ae8-89f7-bdaf6480881a\") " Jan 22 14:02:42 crc kubenswrapper[4769]: I0122 14:02:42.423630 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-462mx\" (UniqueName: \"kubernetes.io/projected/4383579e-af20-4ae8-89f7-bdaf6480881a-kube-api-access-462mx\") pod \"4383579e-af20-4ae8-89f7-bdaf6480881a\" (UID: \"4383579e-af20-4ae8-89f7-bdaf6480881a\") " Jan 22 14:02:42 crc kubenswrapper[4769]: I0122 14:02:42.423670 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4383579e-af20-4ae8-89f7-bdaf6480881a-config-data-custom\") pod \"4383579e-af20-4ae8-89f7-bdaf6480881a\" (UID: \"4383579e-af20-4ae8-89f7-bdaf6480881a\") " Jan 22 14:02:42 crc kubenswrapper[4769]: I0122 14:02:42.424025 4769 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4383579e-af20-4ae8-89f7-bdaf6480881a-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:42 crc kubenswrapper[4769]: I0122 14:02:42.431966 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4383579e-af20-4ae8-89f7-bdaf6480881a-scripts" (OuterVolumeSpecName: "scripts") pod "4383579e-af20-4ae8-89f7-bdaf6480881a" (UID: "4383579e-af20-4ae8-89f7-bdaf6480881a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:42 crc kubenswrapper[4769]: I0122 14:02:42.434100 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4383579e-af20-4ae8-89f7-bdaf6480881a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "4383579e-af20-4ae8-89f7-bdaf6480881a" (UID: "4383579e-af20-4ae8-89f7-bdaf6480881a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:42 crc kubenswrapper[4769]: I0122 14:02:42.437621 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4383579e-af20-4ae8-89f7-bdaf6480881a-kube-api-access-462mx" (OuterVolumeSpecName: "kube-api-access-462mx") pod "4383579e-af20-4ae8-89f7-bdaf6480881a" (UID: "4383579e-af20-4ae8-89f7-bdaf6480881a"). InnerVolumeSpecName "kube-api-access-462mx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:02:42 crc kubenswrapper[4769]: I0122 14:02:42.516095 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4383579e-af20-4ae8-89f7-bdaf6480881a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4383579e-af20-4ae8-89f7-bdaf6480881a" (UID: "4383579e-af20-4ae8-89f7-bdaf6480881a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:42 crc kubenswrapper[4769]: I0122 14:02:42.526011 4769 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4383579e-af20-4ae8-89f7-bdaf6480881a-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:42 crc kubenswrapper[4769]: I0122 14:02:42.526040 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-462mx\" (UniqueName: \"kubernetes.io/projected/4383579e-af20-4ae8-89f7-bdaf6480881a-kube-api-access-462mx\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:42 crc kubenswrapper[4769]: I0122 14:02:42.526050 4769 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4383579e-af20-4ae8-89f7-bdaf6480881a-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:42 crc kubenswrapper[4769]: I0122 14:02:42.526061 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4383579e-af20-4ae8-89f7-bdaf6480881a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:42 crc kubenswrapper[4769]: I0122 14:02:42.563920 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4383579e-af20-4ae8-89f7-bdaf6480881a-config-data" (OuterVolumeSpecName: "config-data") pod "4383579e-af20-4ae8-89f7-bdaf6480881a" (UID: "4383579e-af20-4ae8-89f7-bdaf6480881a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:42 crc kubenswrapper[4769]: I0122 14:02:42.628129 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4383579e-af20-4ae8-89f7-bdaf6480881a-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:42 crc kubenswrapper[4769]: I0122 14:02:42.935023 4769 generic.go:334] "Generic (PLEG): container finished" podID="4383579e-af20-4ae8-89f7-bdaf6480881a" containerID="3354e5732aafcb263d3676fad9ee3df3cbabafc6bd7029cbe04efa83053a2c32" exitCode=0 Jan 22 14:02:42 crc kubenswrapper[4769]: I0122 14:02:42.935110 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4383579e-af20-4ae8-89f7-bdaf6480881a","Type":"ContainerDied","Data":"3354e5732aafcb263d3676fad9ee3df3cbabafc6bd7029cbe04efa83053a2c32"} Jan 22 14:02:42 crc kubenswrapper[4769]: I0122 14:02:42.935443 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4383579e-af20-4ae8-89f7-bdaf6480881a","Type":"ContainerDied","Data":"f9d86078c4b4a242efcd83eab3552c5360368cb84cb5844f47a02e8a76d0befc"} Jan 22 14:02:42 crc kubenswrapper[4769]: I0122 14:02:42.935141 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 22 14:02:42 crc kubenswrapper[4769]: I0122 14:02:42.935494 4769 scope.go:117] "RemoveContainer" containerID="b3dca9a61e5a77a6229ecdcd9e48901971abfbd1767813a6cd35dba0f4aaac74" Jan 22 14:02:42 crc kubenswrapper[4769]: I0122 14:02:42.941609 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"a46459a9-7fab-439c-95fe-5d6cdcb16997","Type":"ContainerStarted","Data":"a356f107447d745c0f7b15bc272f1ebc4dde1957fc214b6a088fa2276d888939"} Jan 22 14:02:42 crc kubenswrapper[4769]: I0122 14:02:42.968589 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 22 14:02:42 crc kubenswrapper[4769]: I0122 14:02:42.977280 4769 scope.go:117] "RemoveContainer" containerID="3354e5732aafcb263d3676fad9ee3df3cbabafc6bd7029cbe04efa83053a2c32" Jan 22 14:02:42 crc kubenswrapper[4769]: I0122 14:02:42.987930 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 22 14:02:43 crc kubenswrapper[4769]: I0122 14:02:43.006864 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 22 14:02:43 crc kubenswrapper[4769]: E0122 14:02:43.007312 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4383579e-af20-4ae8-89f7-bdaf6480881a" containerName="probe" Jan 22 14:02:43 crc kubenswrapper[4769]: I0122 14:02:43.007329 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="4383579e-af20-4ae8-89f7-bdaf6480881a" containerName="probe" Jan 22 14:02:43 crc kubenswrapper[4769]: E0122 14:02:43.007346 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4383579e-af20-4ae8-89f7-bdaf6480881a" containerName="cinder-scheduler" Jan 22 14:02:43 crc kubenswrapper[4769]: I0122 14:02:43.007356 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="4383579e-af20-4ae8-89f7-bdaf6480881a" containerName="cinder-scheduler" Jan 22 14:02:43 crc kubenswrapper[4769]: I0122 14:02:43.007554 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="4383579e-af20-4ae8-89f7-bdaf6480881a" containerName="cinder-scheduler" Jan 22 14:02:43 crc kubenswrapper[4769]: I0122 14:02:43.007580 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="4383579e-af20-4ae8-89f7-bdaf6480881a" containerName="probe" Jan 22 14:02:43 crc kubenswrapper[4769]: I0122 14:02:43.008533 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 22 14:02:43 crc kubenswrapper[4769]: I0122 14:02:43.016268 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 22 14:02:43 crc kubenswrapper[4769]: I0122 14:02:43.024034 4769 scope.go:117] "RemoveContainer" containerID="b3dca9a61e5a77a6229ecdcd9e48901971abfbd1767813a6cd35dba0f4aaac74" Jan 22 14:02:43 crc kubenswrapper[4769]: I0122 14:02:43.024399 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 22 14:02:43 crc kubenswrapper[4769]: E0122 14:02:43.028264 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3dca9a61e5a77a6229ecdcd9e48901971abfbd1767813a6cd35dba0f4aaac74\": container with ID starting with b3dca9a61e5a77a6229ecdcd9e48901971abfbd1767813a6cd35dba0f4aaac74 not found: ID does not exist" containerID="b3dca9a61e5a77a6229ecdcd9e48901971abfbd1767813a6cd35dba0f4aaac74" Jan 22 14:02:43 crc kubenswrapper[4769]: I0122 14:02:43.028318 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3dca9a61e5a77a6229ecdcd9e48901971abfbd1767813a6cd35dba0f4aaac74"} err="failed to get container status \"b3dca9a61e5a77a6229ecdcd9e48901971abfbd1767813a6cd35dba0f4aaac74\": rpc error: code = NotFound desc = could not find container \"b3dca9a61e5a77a6229ecdcd9e48901971abfbd1767813a6cd35dba0f4aaac74\": container with ID starting with b3dca9a61e5a77a6229ecdcd9e48901971abfbd1767813a6cd35dba0f4aaac74 not found: ID does not exist" Jan 22 14:02:43 crc kubenswrapper[4769]: I0122 14:02:43.028352 4769 scope.go:117] "RemoveContainer" containerID="3354e5732aafcb263d3676fad9ee3df3cbabafc6bd7029cbe04efa83053a2c32" Jan 22 14:02:43 crc kubenswrapper[4769]: E0122 14:02:43.032351 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3354e5732aafcb263d3676fad9ee3df3cbabafc6bd7029cbe04efa83053a2c32\": container with ID starting with 3354e5732aafcb263d3676fad9ee3df3cbabafc6bd7029cbe04efa83053a2c32 not found: ID does not exist" containerID="3354e5732aafcb263d3676fad9ee3df3cbabafc6bd7029cbe04efa83053a2c32" Jan 22 14:02:43 crc kubenswrapper[4769]: I0122 14:02:43.032388 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3354e5732aafcb263d3676fad9ee3df3cbabafc6bd7029cbe04efa83053a2c32"} err="failed to get container status \"3354e5732aafcb263d3676fad9ee3df3cbabafc6bd7029cbe04efa83053a2c32\": rpc error: code = NotFound desc = could not find container \"3354e5732aafcb263d3676fad9ee3df3cbabafc6bd7029cbe04efa83053a2c32\": container with ID starting with 3354e5732aafcb263d3676fad9ee3df3cbabafc6bd7029cbe04efa83053a2c32 not found: ID does not exist" Jan 22 14:02:43 crc kubenswrapper[4769]: I0122 14:02:43.142203 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4552f275-d56c-4f3d-a8fd-7e5c4e2da02e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"4552f275-d56c-4f3d-a8fd-7e5c4e2da02e\") " pod="openstack/cinder-scheduler-0" Jan 22 14:02:43 crc kubenswrapper[4769]: I0122 14:02:43.142358 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4552f275-d56c-4f3d-a8fd-7e5c4e2da02e-config-data-custom\") pod 
\"cinder-scheduler-0\" (UID: \"4552f275-d56c-4f3d-a8fd-7e5c4e2da02e\") " pod="openstack/cinder-scheduler-0" Jan 22 14:02:43 crc kubenswrapper[4769]: I0122 14:02:43.142478 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4552f275-d56c-4f3d-a8fd-7e5c4e2da02e-config-data\") pod \"cinder-scheduler-0\" (UID: \"4552f275-d56c-4f3d-a8fd-7e5c4e2da02e\") " pod="openstack/cinder-scheduler-0" Jan 22 14:02:43 crc kubenswrapper[4769]: I0122 14:02:43.142527 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4552f275-d56c-4f3d-a8fd-7e5c4e2da02e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"4552f275-d56c-4f3d-a8fd-7e5c4e2da02e\") " pod="openstack/cinder-scheduler-0" Jan 22 14:02:43 crc kubenswrapper[4769]: I0122 14:02:43.142568 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4552f275-d56c-4f3d-a8fd-7e5c4e2da02e-scripts\") pod \"cinder-scheduler-0\" (UID: \"4552f275-d56c-4f3d-a8fd-7e5c4e2da02e\") " pod="openstack/cinder-scheduler-0" Jan 22 14:02:43 crc kubenswrapper[4769]: I0122 14:02:43.142613 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm9kd\" (UniqueName: \"kubernetes.io/projected/4552f275-d56c-4f3d-a8fd-7e5c4e2da02e-kube-api-access-zm9kd\") pod \"cinder-scheduler-0\" (UID: \"4552f275-d56c-4f3d-a8fd-7e5c4e2da02e\") " pod="openstack/cinder-scheduler-0" Jan 22 14:02:43 crc kubenswrapper[4769]: I0122 14:02:43.244358 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4552f275-d56c-4f3d-a8fd-7e5c4e2da02e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"4552f275-d56c-4f3d-a8fd-7e5c4e2da02e\") " pod="openstack/cinder-scheduler-0" Jan 22 14:02:43 crc kubenswrapper[4769]: I0122 14:02:43.244433 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4552f275-d56c-4f3d-a8fd-7e5c4e2da02e-config-data\") pod \"cinder-scheduler-0\" (UID: \"4552f275-d56c-4f3d-a8fd-7e5c4e2da02e\") " pod="openstack/cinder-scheduler-0" Jan 22 14:02:43 crc kubenswrapper[4769]: I0122 14:02:43.244477 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4552f275-d56c-4f3d-a8fd-7e5c4e2da02e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"4552f275-d56c-4f3d-a8fd-7e5c4e2da02e\") " pod="openstack/cinder-scheduler-0" Jan 22 14:02:43 crc kubenswrapper[4769]: I0122 14:02:43.244518 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4552f275-d56c-4f3d-a8fd-7e5c4e2da02e-scripts\") pod \"cinder-scheduler-0\" (UID: \"4552f275-d56c-4f3d-a8fd-7e5c4e2da02e\") " pod="openstack/cinder-scheduler-0" Jan 22 14:02:43 crc kubenswrapper[4769]: I0122 14:02:43.244573 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zm9kd\" (UniqueName: \"kubernetes.io/projected/4552f275-d56c-4f3d-a8fd-7e5c4e2da02e-kube-api-access-zm9kd\") pod \"cinder-scheduler-0\" (UID: \"4552f275-d56c-4f3d-a8fd-7e5c4e2da02e\") " pod="openstack/cinder-scheduler-0" Jan 22 14:02:43 crc kubenswrapper[4769]: I0122 14:02:43.244754 4769 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4552f275-d56c-4f3d-a8fd-7e5c4e2da02e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"4552f275-d56c-4f3d-a8fd-7e5c4e2da02e\") " pod="openstack/cinder-scheduler-0" Jan 22 14:02:43 crc kubenswrapper[4769]: I0122 14:02:43.244962 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4552f275-d56c-4f3d-a8fd-7e5c4e2da02e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"4552f275-d56c-4f3d-a8fd-7e5c4e2da02e\") " pod="openstack/cinder-scheduler-0" Jan 22 14:02:43 crc kubenswrapper[4769]: I0122 14:02:43.249345 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4552f275-d56c-4f3d-a8fd-7e5c4e2da02e-config-data\") pod \"cinder-scheduler-0\" (UID: \"4552f275-d56c-4f3d-a8fd-7e5c4e2da02e\") " pod="openstack/cinder-scheduler-0" Jan 22 14:02:43 crc kubenswrapper[4769]: I0122 14:02:43.256612 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4552f275-d56c-4f3d-a8fd-7e5c4e2da02e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"4552f275-d56c-4f3d-a8fd-7e5c4e2da02e\") " pod="openstack/cinder-scheduler-0" Jan 22 14:02:43 crc kubenswrapper[4769]: I0122 14:02:43.256996 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4552f275-d56c-4f3d-a8fd-7e5c4e2da02e-scripts\") pod \"cinder-scheduler-0\" (UID: \"4552f275-d56c-4f3d-a8fd-7e5c4e2da02e\") " pod="openstack/cinder-scheduler-0" Jan 22 14:02:43 crc kubenswrapper[4769]: I0122 14:02:43.257433 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4552f275-d56c-4f3d-a8fd-7e5c4e2da02e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"4552f275-d56c-4f3d-a8fd-7e5c4e2da02e\") " pod="openstack/cinder-scheduler-0" Jan 22 14:02:43 crc kubenswrapper[4769]: I0122 14:02:43.288279 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zm9kd\" (UniqueName: \"kubernetes.io/projected/4552f275-d56c-4f3d-a8fd-7e5c4e2da02e-kube-api-access-zm9kd\") pod \"cinder-scheduler-0\" (UID: \"4552f275-d56c-4f3d-a8fd-7e5c4e2da02e\") " pod="openstack/cinder-scheduler-0" Jan 22 14:02:43 crc kubenswrapper[4769]: I0122 14:02:43.340771 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 22 14:02:43 crc kubenswrapper[4769]: I0122 14:02:43.906213 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 22 14:02:43 crc kubenswrapper[4769]: W0122 14:02:43.909370 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4552f275_d56c_4f3d_a8fd_7e5c4e2da02e.slice/crio-a6147300d28d15144c2a6cbdc364d1b893206a6a3a264aa8705d121faf758802 WatchSource:0}: Error finding container a6147300d28d15144c2a6cbdc364d1b893206a6a3a264aa8705d121faf758802: Status 404 returned error can't find the container with id a6147300d28d15144c2a6cbdc364d1b893206a6a3a264aa8705d121faf758802 Jan 22 14:02:43 crc kubenswrapper[4769]: I0122 14:02:43.952224 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4552f275-d56c-4f3d-a8fd-7e5c4e2da02e","Type":"ContainerStarted","Data":"a6147300d28d15144c2a6cbdc364d1b893206a6a3a264aa8705d121faf758802"} Jan 22 14:02:44 crc kubenswrapper[4769]: I0122 14:02:44.561281 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5765d95c66-48prv" Jan 22 14:02:44 crc kubenswrapper[4769]: I0122 14:02:44.600070 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 22 14:02:44 crc kubenswrapper[4769]: I0122 14:02:44.734663 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5765d95c66-48prv" Jan 22 14:02:44 crc kubenswrapper[4769]: I0122 14:02:44.816303 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6bc9c49fb8-n7dm2"] Jan 22 14:02:44 crc kubenswrapper[4769]: I0122 14:02:44.816773 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6bc9c49fb8-n7dm2" podUID="eaad9379-b67a-4b3a-8cc9-f37d9ad425e8" containerName="barbican-api-log" containerID="cri-o://04c4a8706de1fbb034493ccbd107bf586baaf531c480261c94f054acfee6f908" gracePeriod=30 Jan 22 14:02:44 crc kubenswrapper[4769]: I0122 14:02:44.817426 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6bc9c49fb8-n7dm2" podUID="eaad9379-b67a-4b3a-8cc9-f37d9ad425e8" containerName="barbican-api" containerID="cri-o://d6a865911489b9a1028413866f392612dc71ad5cc1fae59e38104d4f68999e20" gracePeriod=30 Jan 22 14:02:44 crc kubenswrapper[4769]: I0122 14:02:44.921285 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4383579e-af20-4ae8-89f7-bdaf6480881a" path="/var/lib/kubelet/pods/4383579e-af20-4ae8-89f7-bdaf6480881a/volumes" Jan 22 14:02:44 crc kubenswrapper[4769]: I0122 14:02:44.996307 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4552f275-d56c-4f3d-a8fd-7e5c4e2da02e","Type":"ContainerStarted","Data":"504d239ec4be194cb42134b743bbbccfa90e53d23b1ba970b9ad6cf450ba4478"} Jan 22 14:02:44 crc kubenswrapper[4769]: I0122 14:02:44.998314 4769 generic.go:334] "Generic (PLEG): container finished" podID="eaad9379-b67a-4b3a-8cc9-f37d9ad425e8" containerID="04c4a8706de1fbb034493ccbd107bf586baaf531c480261c94f054acfee6f908" exitCode=143 Jan 22 14:02:44 crc kubenswrapper[4769]: I0122 14:02:44.999201 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6bc9c49fb8-n7dm2" 
event={"ID":"eaad9379-b67a-4b3a-8cc9-f37d9ad425e8","Type":"ContainerDied","Data":"04c4a8706de1fbb034493ccbd107bf586baaf531c480261c94f054acfee6f908"} Jan 22 14:02:45 crc kubenswrapper[4769]: I0122 14:02:45.563095 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7ffdb95bfd-x5vfj" Jan 22 14:02:46 crc kubenswrapper[4769]: I0122 14:02:46.010350 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4552f275-d56c-4f3d-a8fd-7e5c4e2da02e","Type":"ContainerStarted","Data":"86ade0dfcf9afcd576932a25c11fa146cc4582a1aad43558d46829daa678ba95"} Jan 22 14:02:46 crc kubenswrapper[4769]: I0122 14:02:46.039354 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.039331659 podStartE2EDuration="4.039331659s" podCreationTimestamp="2026-01-22 14:02:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:02:46.029809311 +0000 UTC m=+1145.440919240" watchObservedRunningTime="2026-01-22 14:02:46.039331659 +0000 UTC m=+1145.450441588" Jan 22 14:02:47 crc kubenswrapper[4769]: I0122 14:02:47.657275 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-576cb8587-7cl26"] Jan 22 14:02:47 crc kubenswrapper[4769]: I0122 14:02:47.659573 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-576cb8587-7cl26" Jan 22 14:02:47 crc kubenswrapper[4769]: I0122 14:02:47.666703 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 22 14:02:47 crc kubenswrapper[4769]: I0122 14:02:47.666730 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 22 14:02:47 crc kubenswrapper[4769]: I0122 14:02:47.666730 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 22 14:02:47 crc kubenswrapper[4769]: I0122 14:02:47.670519 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-576cb8587-7cl26"] Jan 22 14:02:47 crc kubenswrapper[4769]: I0122 14:02:47.700643 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5d6bcd56b9-2hx4m" Jan 22 14:02:47 crc kubenswrapper[4769]: I0122 14:02:47.764007 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7ffdb95bfd-x5vfj"] Jan 22 14:02:47 crc kubenswrapper[4769]: I0122 14:02:47.764258 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7ffdb95bfd-x5vfj" podUID="0783e518-6a8e-43a3-9b33-4d0710f958f6" containerName="neutron-api" containerID="cri-o://c85ede29f7444218742a32b8c6ee6ce640aed0f91c712213650abe7455210e79" gracePeriod=30 Jan 22 14:02:47 crc kubenswrapper[4769]: I0122 14:02:47.764734 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7ffdb95bfd-x5vfj" podUID="0783e518-6a8e-43a3-9b33-4d0710f958f6" containerName="neutron-httpd" containerID="cri-o://1a3f324f9c10250340c90b3fa9891a5895621c3821c7c74ce5c3074476e207b0" gracePeriod=30 Jan 22 14:02:47 crc kubenswrapper[4769]: I0122 14:02:47.848704 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6tps\" (UniqueName: \"kubernetes.io/projected/75afafe2-c784-45fa-8104-1115c8921138-kube-api-access-n6tps\") pod 
\"swift-proxy-576cb8587-7cl26\" (UID: \"75afafe2-c784-45fa-8104-1115c8921138\") " pod="openstack/swift-proxy-576cb8587-7cl26" Jan 22 14:02:47 crc kubenswrapper[4769]: I0122 14:02:47.848813 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/75afafe2-c784-45fa-8104-1115c8921138-etc-swift\") pod \"swift-proxy-576cb8587-7cl26\" (UID: \"75afafe2-c784-45fa-8104-1115c8921138\") " pod="openstack/swift-proxy-576cb8587-7cl26" Jan 22 14:02:47 crc kubenswrapper[4769]: I0122 14:02:47.848867 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/75afafe2-c784-45fa-8104-1115c8921138-internal-tls-certs\") pod \"swift-proxy-576cb8587-7cl26\" (UID: \"75afafe2-c784-45fa-8104-1115c8921138\") " pod="openstack/swift-proxy-576cb8587-7cl26" Jan 22 14:02:47 crc kubenswrapper[4769]: I0122 14:02:47.848896 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75afafe2-c784-45fa-8104-1115c8921138-config-data\") pod \"swift-proxy-576cb8587-7cl26\" (UID: \"75afafe2-c784-45fa-8104-1115c8921138\") " pod="openstack/swift-proxy-576cb8587-7cl26" Jan 22 14:02:47 crc kubenswrapper[4769]: I0122 14:02:47.848950 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75afafe2-c784-45fa-8104-1115c8921138-combined-ca-bundle\") pod \"swift-proxy-576cb8587-7cl26\" (UID: \"75afafe2-c784-45fa-8104-1115c8921138\") " pod="openstack/swift-proxy-576cb8587-7cl26" Jan 22 14:02:47 crc kubenswrapper[4769]: I0122 14:02:47.849061 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75afafe2-c784-45fa-8104-1115c8921138-run-httpd\") pod \"swift-proxy-576cb8587-7cl26\" (UID: \"75afafe2-c784-45fa-8104-1115c8921138\") " pod="openstack/swift-proxy-576cb8587-7cl26" Jan 22 14:02:47 crc kubenswrapper[4769]: I0122 14:02:47.849131 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75afafe2-c784-45fa-8104-1115c8921138-log-httpd\") pod \"swift-proxy-576cb8587-7cl26\" (UID: \"75afafe2-c784-45fa-8104-1115c8921138\") " pod="openstack/swift-proxy-576cb8587-7cl26" Jan 22 14:02:47 crc kubenswrapper[4769]: I0122 14:02:47.849184 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/75afafe2-c784-45fa-8104-1115c8921138-public-tls-certs\") pod \"swift-proxy-576cb8587-7cl26\" (UID: \"75afafe2-c784-45fa-8104-1115c8921138\") " pod="openstack/swift-proxy-576cb8587-7cl26" Jan 22 14:02:47 crc kubenswrapper[4769]: I0122 14:02:47.950269 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/75afafe2-c784-45fa-8104-1115c8921138-etc-swift\") pod \"swift-proxy-576cb8587-7cl26\" (UID: \"75afafe2-c784-45fa-8104-1115c8921138\") " pod="openstack/swift-proxy-576cb8587-7cl26" Jan 22 14:02:47 crc kubenswrapper[4769]: I0122 14:02:47.950327 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/75afafe2-c784-45fa-8104-1115c8921138-internal-tls-certs\") pod \"swift-proxy-576cb8587-7cl26\" (UID: \"75afafe2-c784-45fa-8104-1115c8921138\") " pod="openstack/swift-proxy-576cb8587-7cl26" Jan 22 14:02:47 crc kubenswrapper[4769]: I0122 14:02:47.950351 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75afafe2-c784-45fa-8104-1115c8921138-config-data\") pod \"swift-proxy-576cb8587-7cl26\" (UID: \"75afafe2-c784-45fa-8104-1115c8921138\") " pod="openstack/swift-proxy-576cb8587-7cl26" Jan 22 14:02:47 crc kubenswrapper[4769]: I0122 14:02:47.950381 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75afafe2-c784-45fa-8104-1115c8921138-combined-ca-bundle\") pod \"swift-proxy-576cb8587-7cl26\" (UID: \"75afafe2-c784-45fa-8104-1115c8921138\") " pod="openstack/swift-proxy-576cb8587-7cl26" Jan 22 14:02:47 crc kubenswrapper[4769]: I0122 14:02:47.950430 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75afafe2-c784-45fa-8104-1115c8921138-run-httpd\") pod \"swift-proxy-576cb8587-7cl26\" (UID: \"75afafe2-c784-45fa-8104-1115c8921138\") " pod="openstack/swift-proxy-576cb8587-7cl26" Jan 22 14:02:47 crc kubenswrapper[4769]: I0122 14:02:47.950477 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75afafe2-c784-45fa-8104-1115c8921138-log-httpd\") pod \"swift-proxy-576cb8587-7cl26\" (UID: \"75afafe2-c784-45fa-8104-1115c8921138\") " pod="openstack/swift-proxy-576cb8587-7cl26" Jan 22 14:02:47 crc kubenswrapper[4769]: I0122 14:02:47.950501 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/75afafe2-c784-45fa-8104-1115c8921138-public-tls-certs\") pod \"swift-proxy-576cb8587-7cl26\" (UID: \"75afafe2-c784-45fa-8104-1115c8921138\") " pod="openstack/swift-proxy-576cb8587-7cl26" Jan 22 14:02:47 crc kubenswrapper[4769]: I0122 14:02:47.950540 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6tps\" (UniqueName: \"kubernetes.io/projected/75afafe2-c784-45fa-8104-1115c8921138-kube-api-access-n6tps\") pod \"swift-proxy-576cb8587-7cl26\" (UID: \"75afafe2-c784-45fa-8104-1115c8921138\") " pod="openstack/swift-proxy-576cb8587-7cl26" Jan 22 14:02:47 crc kubenswrapper[4769]: I0122 14:02:47.952247 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75afafe2-c784-45fa-8104-1115c8921138-run-httpd\") pod \"swift-proxy-576cb8587-7cl26\" (UID: \"75afafe2-c784-45fa-8104-1115c8921138\") " pod="openstack/swift-proxy-576cb8587-7cl26" Jan 22 14:02:47 crc kubenswrapper[4769]: I0122 14:02:47.954308 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75afafe2-c784-45fa-8104-1115c8921138-log-httpd\") pod \"swift-proxy-576cb8587-7cl26\" (UID: \"75afafe2-c784-45fa-8104-1115c8921138\") " pod="openstack/swift-proxy-576cb8587-7cl26" Jan 22 14:02:47 crc kubenswrapper[4769]: I0122 14:02:47.956445 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75afafe2-c784-45fa-8104-1115c8921138-combined-ca-bundle\") pod \"swift-proxy-576cb8587-7cl26\" 
(UID: \"75afafe2-c784-45fa-8104-1115c8921138\") " pod="openstack/swift-proxy-576cb8587-7cl26" Jan 22 14:02:47 crc kubenswrapper[4769]: I0122 14:02:47.957117 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75afafe2-c784-45fa-8104-1115c8921138-config-data\") pod \"swift-proxy-576cb8587-7cl26\" (UID: \"75afafe2-c784-45fa-8104-1115c8921138\") " pod="openstack/swift-proxy-576cb8587-7cl26" Jan 22 14:02:47 crc kubenswrapper[4769]: I0122 14:02:47.960534 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/75afafe2-c784-45fa-8104-1115c8921138-public-tls-certs\") pod \"swift-proxy-576cb8587-7cl26\" (UID: \"75afafe2-c784-45fa-8104-1115c8921138\") " pod="openstack/swift-proxy-576cb8587-7cl26" Jan 22 14:02:47 crc kubenswrapper[4769]: I0122 14:02:47.960625 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/75afafe2-c784-45fa-8104-1115c8921138-internal-tls-certs\") pod \"swift-proxy-576cb8587-7cl26\" (UID: \"75afafe2-c784-45fa-8104-1115c8921138\") " pod="openstack/swift-proxy-576cb8587-7cl26" Jan 22 14:02:47 crc kubenswrapper[4769]: I0122 14:02:47.960963 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/75afafe2-c784-45fa-8104-1115c8921138-etc-swift\") pod \"swift-proxy-576cb8587-7cl26\" (UID: \"75afafe2-c784-45fa-8104-1115c8921138\") " pod="openstack/swift-proxy-576cb8587-7cl26" Jan 22 14:02:47 crc kubenswrapper[4769]: I0122 14:02:47.974703 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6tps\" (UniqueName: \"kubernetes.io/projected/75afafe2-c784-45fa-8104-1115c8921138-kube-api-access-n6tps\") pod \"swift-proxy-576cb8587-7cl26\" (UID: \"75afafe2-c784-45fa-8104-1115c8921138\") " pod="openstack/swift-proxy-576cb8587-7cl26" Jan 22 14:02:47 crc kubenswrapper[4769]: I0122 14:02:47.983186 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-576cb8587-7cl26" Jan 22 14:02:48 crc kubenswrapper[4769]: I0122 14:02:48.018483 4769 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6bc9c49fb8-n7dm2" podUID="eaad9379-b67a-4b3a-8cc9-f37d9ad425e8" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.161:9311/healthcheck\": read tcp 10.217.0.2:41048->10.217.0.161:9311: read: connection reset by peer" Jan 22 14:02:48 crc kubenswrapper[4769]: I0122 14:02:48.018518 4769 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6bc9c49fb8-n7dm2" podUID="eaad9379-b67a-4b3a-8cc9-f37d9ad425e8" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.161:9311/healthcheck\": read tcp 10.217.0.2:41064->10.217.0.161:9311: read: connection reset by peer" Jan 22 14:02:48 crc kubenswrapper[4769]: I0122 14:02:48.050281 4769 generic.go:334] "Generic (PLEG): container finished" podID="0783e518-6a8e-43a3-9b33-4d0710f958f6" containerID="1a3f324f9c10250340c90b3fa9891a5895621c3821c7c74ce5c3074476e207b0" exitCode=0 Jan 22 14:02:48 crc kubenswrapper[4769]: I0122 14:02:48.050324 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7ffdb95bfd-x5vfj" event={"ID":"0783e518-6a8e-43a3-9b33-4d0710f958f6","Type":"ContainerDied","Data":"1a3f324f9c10250340c90b3fa9891a5895621c3821c7c74ce5c3074476e207b0"} Jan 22 14:02:48 crc kubenswrapper[4769]: E0122 14:02:48.190338 4769 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeaad9379_b67a_4b3a_8cc9_f37d9ad425e8.slice/crio-d6a865911489b9a1028413866f392612dc71ad5cc1fae59e38104d4f68999e20.scope\": RecentStats: unable to find data in memory cache]" Jan 22 14:02:48 crc kubenswrapper[4769]: I0122 14:02:48.341868 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 22 14:02:48 crc kubenswrapper[4769]: I0122 14:02:48.507397 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6bc9c49fb8-n7dm2" Jan 22 14:02:48 crc kubenswrapper[4769]: I0122 14:02:48.665306 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eaad9379-b67a-4b3a-8cc9-f37d9ad425e8-config-data\") pod \"eaad9379-b67a-4b3a-8cc9-f37d9ad425e8\" (UID: \"eaad9379-b67a-4b3a-8cc9-f37d9ad425e8\") " Jan 22 14:02:48 crc kubenswrapper[4769]: I0122 14:02:48.665377 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-br56m\" (UniqueName: \"kubernetes.io/projected/eaad9379-b67a-4b3a-8cc9-f37d9ad425e8-kube-api-access-br56m\") pod \"eaad9379-b67a-4b3a-8cc9-f37d9ad425e8\" (UID: \"eaad9379-b67a-4b3a-8cc9-f37d9ad425e8\") " Jan 22 14:02:48 crc kubenswrapper[4769]: I0122 14:02:48.665433 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eaad9379-b67a-4b3a-8cc9-f37d9ad425e8-logs\") pod \"eaad9379-b67a-4b3a-8cc9-f37d9ad425e8\" (UID: \"eaad9379-b67a-4b3a-8cc9-f37d9ad425e8\") " Jan 22 14:02:48 crc kubenswrapper[4769]: I0122 14:02:48.665465 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaad9379-b67a-4b3a-8cc9-f37d9ad425e8-combined-ca-bundle\") pod \"eaad9379-b67a-4b3a-8cc9-f37d9ad425e8\" (UID: \"eaad9379-b67a-4b3a-8cc9-f37d9ad425e8\") " Jan 22 14:02:48 crc kubenswrapper[4769]: I0122 14:02:48.665599 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eaad9379-b67a-4b3a-8cc9-f37d9ad425e8-config-data-custom\") pod \"eaad9379-b67a-4b3a-8cc9-f37d9ad425e8\" (UID: \"eaad9379-b67a-4b3a-8cc9-f37d9ad425e8\") " Jan 22 14:02:48 crc kubenswrapper[4769]: I0122 14:02:48.670990 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eaad9379-b67a-4b3a-8cc9-f37d9ad425e8-logs" (OuterVolumeSpecName: "logs") pod "eaad9379-b67a-4b3a-8cc9-f37d9ad425e8" (UID: "eaad9379-b67a-4b3a-8cc9-f37d9ad425e8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 14:02:48 crc kubenswrapper[4769]: I0122 14:02:48.674036 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eaad9379-b67a-4b3a-8cc9-f37d9ad425e8-kube-api-access-br56m" (OuterVolumeSpecName: "kube-api-access-br56m") pod "eaad9379-b67a-4b3a-8cc9-f37d9ad425e8" (UID: "eaad9379-b67a-4b3a-8cc9-f37d9ad425e8"). InnerVolumeSpecName "kube-api-access-br56m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:02:48 crc kubenswrapper[4769]: I0122 14:02:48.676489 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaad9379-b67a-4b3a-8cc9-f37d9ad425e8-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "eaad9379-b67a-4b3a-8cc9-f37d9ad425e8" (UID: "eaad9379-b67a-4b3a-8cc9-f37d9ad425e8"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:48 crc kubenswrapper[4769]: I0122 14:02:48.696438 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaad9379-b67a-4b3a-8cc9-f37d9ad425e8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eaad9379-b67a-4b3a-8cc9-f37d9ad425e8" (UID: "eaad9379-b67a-4b3a-8cc9-f37d9ad425e8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:48 crc kubenswrapper[4769]: I0122 14:02:48.733520 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaad9379-b67a-4b3a-8cc9-f37d9ad425e8-config-data" (OuterVolumeSpecName: "config-data") pod "eaad9379-b67a-4b3a-8cc9-f37d9ad425e8" (UID: "eaad9379-b67a-4b3a-8cc9-f37d9ad425e8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:48 crc kubenswrapper[4769]: I0122 14:02:48.739539 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-576cb8587-7cl26"] Jan 22 14:02:48 crc kubenswrapper[4769]: W0122 14:02:48.746029 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75afafe2_c784_45fa_8104_1115c8921138.slice/crio-e856f9a5d1e8ef92da77622b170cc2bf367179d2476b441d0d4e4cc36d12e8b2 WatchSource:0}: Error finding container e856f9a5d1e8ef92da77622b170cc2bf367179d2476b441d0d4e4cc36d12e8b2: Status 404 returned error can't find the container with id e856f9a5d1e8ef92da77622b170cc2bf367179d2476b441d0d4e4cc36d12e8b2 Jan 22 14:02:48 crc kubenswrapper[4769]: I0122 14:02:48.767636 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eaad9379-b67a-4b3a-8cc9-f37d9ad425e8-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:48 crc kubenswrapper[4769]: I0122 14:02:48.767670 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-br56m\" (UniqueName: \"kubernetes.io/projected/eaad9379-b67a-4b3a-8cc9-f37d9ad425e8-kube-api-access-br56m\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:48 crc kubenswrapper[4769]: I0122 14:02:48.767683 4769 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eaad9379-b67a-4b3a-8cc9-f37d9ad425e8-logs\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:48 crc kubenswrapper[4769]: I0122 14:02:48.767692 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaad9379-b67a-4b3a-8cc9-f37d9ad425e8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:48 crc kubenswrapper[4769]: I0122 14:02:48.767701 4769 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eaad9379-b67a-4b3a-8cc9-f37d9ad425e8-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:49 crc kubenswrapper[4769]: I0122 14:02:49.067847 4769 generic.go:334] "Generic (PLEG): container finished" podID="eaad9379-b67a-4b3a-8cc9-f37d9ad425e8" containerID="d6a865911489b9a1028413866f392612dc71ad5cc1fae59e38104d4f68999e20" exitCode=0 Jan 22 14:02:49 crc kubenswrapper[4769]: I0122 14:02:49.068016 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6bc9c49fb8-n7dm2" Jan 22 14:02:49 crc kubenswrapper[4769]: I0122 14:02:49.068458 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6bc9c49fb8-n7dm2" event={"ID":"eaad9379-b67a-4b3a-8cc9-f37d9ad425e8","Type":"ContainerDied","Data":"d6a865911489b9a1028413866f392612dc71ad5cc1fae59e38104d4f68999e20"} Jan 22 14:02:49 crc kubenswrapper[4769]: I0122 14:02:49.068517 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6bc9c49fb8-n7dm2" event={"ID":"eaad9379-b67a-4b3a-8cc9-f37d9ad425e8","Type":"ContainerDied","Data":"9af8e79839bd151effc1aa29a1d456de2993b92396c6ddf4772fc15ecf95323b"} Jan 22 14:02:49 crc kubenswrapper[4769]: I0122 14:02:49.068537 4769 scope.go:117] "RemoveContainer" containerID="d6a865911489b9a1028413866f392612dc71ad5cc1fae59e38104d4f68999e20" Jan 22 14:02:49 crc kubenswrapper[4769]: I0122 14:02:49.075772 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-576cb8587-7cl26" event={"ID":"75afafe2-c784-45fa-8104-1115c8921138","Type":"ContainerStarted","Data":"e856f9a5d1e8ef92da77622b170cc2bf367179d2476b441d0d4e4cc36d12e8b2"} Jan 22 14:02:49 crc kubenswrapper[4769]: I0122 14:02:49.098367 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6bc9c49fb8-n7dm2"] Jan 22 14:02:49 crc kubenswrapper[4769]: I0122 14:02:49.107664 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-6bc9c49fb8-n7dm2"] Jan 22 14:02:49 crc kubenswrapper[4769]: I0122 14:02:49.846615 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 14:02:49 crc kubenswrapper[4769]: I0122 14:02:49.848165 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e12c3fd8-b199-4dbb-8022-ea1997362b45" containerName="ceilometer-central-agent" containerID="cri-o://3f046e94cf581905bfb412cafcc0aba6ed78f4b25c54f79b4edd2b0575beed31" gracePeriod=30 Jan 22 14:02:49 crc kubenswrapper[4769]: I0122 14:02:49.848202 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e12c3fd8-b199-4dbb-8022-ea1997362b45" containerName="sg-core" containerID="cri-o://b5ee5434348cf923fba435a2559a5a264053474440f8130af21c2d5bd4b2a22c" gracePeriod=30 Jan 22 14:02:49 crc kubenswrapper[4769]: I0122 14:02:49.848196 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e12c3fd8-b199-4dbb-8022-ea1997362b45" containerName="proxy-httpd" containerID="cri-o://0bf74afc1bd09f3d8c6303b0e19d9074d9577290bb273a6f32a45d4dcae632a3" gracePeriod=30 Jan 22 14:02:49 crc kubenswrapper[4769]: I0122 14:02:49.848300 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e12c3fd8-b199-4dbb-8022-ea1997362b45" containerName="ceilometer-notification-agent" containerID="cri-o://81b00fa0cdcc67e791a9afbc3e7519246869d2324e6cda565a71161bcb2fc223" gracePeriod=30 Jan 22 14:02:49 crc kubenswrapper[4769]: I0122 14:02:49.863429 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 22 14:02:50 crc kubenswrapper[4769]: I0122 14:02:50.090097 4769 generic.go:334] "Generic (PLEG): container finished" podID="e12c3fd8-b199-4dbb-8022-ea1997362b45" containerID="0bf74afc1bd09f3d8c6303b0e19d9074d9577290bb273a6f32a45d4dcae632a3" exitCode=0 Jan 22 14:02:50 crc kubenswrapper[4769]: I0122 14:02:50.090475 
4769 generic.go:334] "Generic (PLEG): container finished" podID="e12c3fd8-b199-4dbb-8022-ea1997362b45" containerID="b5ee5434348cf923fba435a2559a5a264053474440f8130af21c2d5bd4b2a22c" exitCode=2 Jan 22 14:02:50 crc kubenswrapper[4769]: I0122 14:02:50.090240 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e12c3fd8-b199-4dbb-8022-ea1997362b45","Type":"ContainerDied","Data":"0bf74afc1bd09f3d8c6303b0e19d9074d9577290bb273a6f32a45d4dcae632a3"} Jan 22 14:02:50 crc kubenswrapper[4769]: I0122 14:02:50.090527 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e12c3fd8-b199-4dbb-8022-ea1997362b45","Type":"ContainerDied","Data":"b5ee5434348cf923fba435a2559a5a264053474440f8130af21c2d5bd4b2a22c"} Jan 22 14:02:50 crc kubenswrapper[4769]: I0122 14:02:50.460671 4769 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6464b9bcc6-tjgjv" podUID="aa581bf8-802c-4c64-80fe-83a1baf50a6e" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Jan 22 14:02:50 crc kubenswrapper[4769]: I0122 14:02:50.896158 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eaad9379-b67a-4b3a-8cc9-f37d9ad425e8" path="/var/lib/kubelet/pods/eaad9379-b67a-4b3a-8cc9-f37d9ad425e8/volumes" Jan 22 14:02:51 crc kubenswrapper[4769]: I0122 14:02:51.115509 4769 generic.go:334] "Generic (PLEG): container finished" podID="e12c3fd8-b199-4dbb-8022-ea1997362b45" containerID="3f046e94cf581905bfb412cafcc0aba6ed78f4b25c54f79b4edd2b0575beed31" exitCode=0 Jan 22 14:02:51 crc kubenswrapper[4769]: I0122 14:02:51.115557 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e12c3fd8-b199-4dbb-8022-ea1997362b45","Type":"ContainerDied","Data":"3f046e94cf581905bfb412cafcc0aba6ed78f4b25c54f79b4edd2b0575beed31"} Jan 22 14:02:52 crc kubenswrapper[4769]: I0122 14:02:52.530561 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 14:02:52 crc kubenswrapper[4769]: I0122 14:02:52.531043 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="dab0b9a4-13fb-42b5-be06-1231f96c4016" containerName="glance-log" containerID="cri-o://df6c3aec1d93c8e3b135e0f0f09265bd6003dda7e97e74ba5f9864130b43bcee" gracePeriod=30 Jan 22 14:02:52 crc kubenswrapper[4769]: I0122 14:02:52.531112 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="dab0b9a4-13fb-42b5-be06-1231f96c4016" containerName="glance-httpd" containerID="cri-o://42b650e1bb6392891cc6da4a8a010ef12200563d87973891cd250c5a4e408d2f" gracePeriod=30 Jan 22 14:02:53 crc kubenswrapper[4769]: I0122 14:02:53.137674 4769 generic.go:334] "Generic (PLEG): container finished" podID="dab0b9a4-13fb-42b5-be06-1231f96c4016" containerID="df6c3aec1d93c8e3b135e0f0f09265bd6003dda7e97e74ba5f9864130b43bcee" exitCode=143 Jan 22 14:02:53 crc kubenswrapper[4769]: I0122 14:02:53.137745 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dab0b9a4-13fb-42b5-be06-1231f96c4016","Type":"ContainerDied","Data":"df6c3aec1d93c8e3b135e0f0f09265bd6003dda7e97e74ba5f9864130b43bcee"} Jan 22 14:02:53 crc kubenswrapper[4769]: I0122 14:02:53.145091 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"e12c3fd8-b199-4dbb-8022-ea1997362b45","Type":"ContainerDied","Data":"81b00fa0cdcc67e791a9afbc3e7519246869d2324e6cda565a71161bcb2fc223"} Jan 22 14:02:53 crc kubenswrapper[4769]: I0122 14:02:53.145160 4769 generic.go:334] "Generic (PLEG): container finished" podID="e12c3fd8-b199-4dbb-8022-ea1997362b45" containerID="81b00fa0cdcc67e791a9afbc3e7519246869d2324e6cda565a71161bcb2fc223" exitCode=0 Jan 22 14:02:53 crc kubenswrapper[4769]: I0122 14:02:53.148223 4769 generic.go:334] "Generic (PLEG): container finished" podID="0783e518-6a8e-43a3-9b33-4d0710f958f6" containerID="c85ede29f7444218742a32b8c6ee6ce640aed0f91c712213650abe7455210e79" exitCode=0 Jan 22 14:02:53 crc kubenswrapper[4769]: I0122 14:02:53.148258 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7ffdb95bfd-x5vfj" event={"ID":"0783e518-6a8e-43a3-9b33-4d0710f958f6","Type":"ContainerDied","Data":"c85ede29f7444218742a32b8c6ee6ce640aed0f91c712213650abe7455210e79"} Jan 22 14:02:53 crc kubenswrapper[4769]: I0122 14:02:53.545965 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.344412 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-tx7mp"] Jan 22 14:02:54 crc kubenswrapper[4769]: E0122 14:02:54.344894 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaad9379-b67a-4b3a-8cc9-f37d9ad425e8" containerName="barbican-api-log" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.344918 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaad9379-b67a-4b3a-8cc9-f37d9ad425e8" containerName="barbican-api-log" Jan 22 14:02:54 crc kubenswrapper[4769]: E0122 14:02:54.344934 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaad9379-b67a-4b3a-8cc9-f37d9ad425e8" containerName="barbican-api" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.344942 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaad9379-b67a-4b3a-8cc9-f37d9ad425e8" containerName="barbican-api" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.345153 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="eaad9379-b67a-4b3a-8cc9-f37d9ad425e8" containerName="barbican-api-log" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.345174 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="eaad9379-b67a-4b3a-8cc9-f37d9ad425e8" containerName="barbican-api" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.345890 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-tx7mp" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.394508 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-tx7mp"] Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.443693 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-5t26t"] Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.447118 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-5t26t" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.469639 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-5t26t"] Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.490875 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/288566dc-b78e-46e4-9bd3-c61bc9c2a6ce-operator-scripts\") pod \"nova-api-db-create-tx7mp\" (UID: \"288566dc-b78e-46e4-9bd3-c61bc9c2a6ce\") " pod="openstack/nova-api-db-create-tx7mp" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.491266 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9z8l\" (UniqueName: \"kubernetes.io/projected/288566dc-b78e-46e4-9bd3-c61bc9c2a6ce-kube-api-access-g9z8l\") pod \"nova-api-db-create-tx7mp\" (UID: \"288566dc-b78e-46e4-9bd3-c61bc9c2a6ce\") " pod="openstack/nova-api-db-create-tx7mp" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.554241 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-264d-account-create-update-4z8cb"] Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.555598 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-264d-account-create-update-4z8cb"] Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.555717 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-264d-account-create-update-4z8cb" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.559279 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.593144 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/288566dc-b78e-46e4-9bd3-c61bc9c2a6ce-operator-scripts\") pod \"nova-api-db-create-tx7mp\" (UID: \"288566dc-b78e-46e4-9bd3-c61bc9c2a6ce\") " pod="openstack/nova-api-db-create-tx7mp" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.593924 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/288566dc-b78e-46e4-9bd3-c61bc9c2a6ce-operator-scripts\") pod \"nova-api-db-create-tx7mp\" (UID: \"288566dc-b78e-46e4-9bd3-c61bc9c2a6ce\") " pod="openstack/nova-api-db-create-tx7mp" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.594102 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e45f7c9a-23a2-40fe-80dc-305f1fbc8e17-operator-scripts\") pod \"nova-cell0-db-create-5t26t\" (UID: \"e45f7c9a-23a2-40fe-80dc-305f1fbc8e17\") " pod="openstack/nova-cell0-db-create-5t26t" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.594148 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mk2z9\" (UniqueName: \"kubernetes.io/projected/e45f7c9a-23a2-40fe-80dc-305f1fbc8e17-kube-api-access-mk2z9\") pod \"nova-cell0-db-create-5t26t\" (UID: \"e45f7c9a-23a2-40fe-80dc-305f1fbc8e17\") " pod="openstack/nova-cell0-db-create-5t26t" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.594293 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9z8l\" (UniqueName: 
\"kubernetes.io/projected/288566dc-b78e-46e4-9bd3-c61bc9c2a6ce-kube-api-access-g9z8l\") pod \"nova-api-db-create-tx7mp\" (UID: \"288566dc-b78e-46e4-9bd3-c61bc9c2a6ce\") " pod="openstack/nova-api-db-create-tx7mp" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.624283 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9z8l\" (UniqueName: \"kubernetes.io/projected/288566dc-b78e-46e4-9bd3-c61bc9c2a6ce-kube-api-access-g9z8l\") pod \"nova-api-db-create-tx7mp\" (UID: \"288566dc-b78e-46e4-9bd3-c61bc9c2a6ce\") " pod="openstack/nova-api-db-create-tx7mp" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.663129 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-fllmn"] Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.665899 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-fllmn" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.672872 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-fllmn"] Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.689272 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-49d8-account-create-update-gnbhc"] Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.691386 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-49d8-account-create-update-gnbhc" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.695490 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe68065a-9702-4440-a09a-2698d21ad5cc-operator-scripts\") pod \"nova-api-264d-account-create-update-4z8cb\" (UID: \"fe68065a-9702-4440-a09a-2698d21ad5cc\") " pod="openstack/nova-api-264d-account-create-update-4z8cb" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.695559 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4h7rt\" (UniqueName: \"kubernetes.io/projected/fe68065a-9702-4440-a09a-2698d21ad5cc-kube-api-access-4h7rt\") pod \"nova-api-264d-account-create-update-4z8cb\" (UID: \"fe68065a-9702-4440-a09a-2698d21ad5cc\") " pod="openstack/nova-api-264d-account-create-update-4z8cb" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.695683 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e45f7c9a-23a2-40fe-80dc-305f1fbc8e17-operator-scripts\") pod \"nova-cell0-db-create-5t26t\" (UID: \"e45f7c9a-23a2-40fe-80dc-305f1fbc8e17\") " pod="openstack/nova-cell0-db-create-5t26t" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.695721 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mk2z9\" (UniqueName: \"kubernetes.io/projected/e45f7c9a-23a2-40fe-80dc-305f1fbc8e17-kube-api-access-mk2z9\") pod \"nova-cell0-db-create-5t26t\" (UID: \"e45f7c9a-23a2-40fe-80dc-305f1fbc8e17\") " pod="openstack/nova-cell0-db-create-5t26t" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.696619 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.697925 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/e45f7c9a-23a2-40fe-80dc-305f1fbc8e17-operator-scripts\") pod \"nova-cell0-db-create-5t26t\" (UID: \"e45f7c9a-23a2-40fe-80dc-305f1fbc8e17\") " pod="openstack/nova-cell0-db-create-5t26t" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.723113 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mk2z9\" (UniqueName: \"kubernetes.io/projected/e45f7c9a-23a2-40fe-80dc-305f1fbc8e17-kube-api-access-mk2z9\") pod \"nova-cell0-db-create-5t26t\" (UID: \"e45f7c9a-23a2-40fe-80dc-305f1fbc8e17\") " pod="openstack/nova-cell0-db-create-5t26t" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.732406 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-49d8-account-create-update-gnbhc"] Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.733107 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-tx7mp" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.767587 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-5t26t" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.797727 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ecb8a996-384c-4155-b45d-6a6335165545-operator-scripts\") pod \"nova-cell1-db-create-fllmn\" (UID: \"ecb8a996-384c-4155-b45d-6a6335165545\") " pod="openstack/nova-cell1-db-create-fllmn" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.797902 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe68065a-9702-4440-a09a-2698d21ad5cc-operator-scripts\") pod \"nova-api-264d-account-create-update-4z8cb\" (UID: \"fe68065a-9702-4440-a09a-2698d21ad5cc\") " pod="openstack/nova-api-264d-account-create-update-4z8cb" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.797969 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rwcd\" (UniqueName: \"kubernetes.io/projected/ecb8a996-384c-4155-b45d-6a6335165545-kube-api-access-8rwcd\") pod \"nova-cell1-db-create-fllmn\" (UID: \"ecb8a996-384c-4155-b45d-6a6335165545\") " pod="openstack/nova-cell1-db-create-fllmn" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.797997 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4h7rt\" (UniqueName: \"kubernetes.io/projected/fe68065a-9702-4440-a09a-2698d21ad5cc-kube-api-access-4h7rt\") pod \"nova-api-264d-account-create-update-4z8cb\" (UID: \"fe68065a-9702-4440-a09a-2698d21ad5cc\") " pod="openstack/nova-api-264d-account-create-update-4z8cb" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.798088 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b33b7a35-52b8-47c6-b5a7-5cf87d838927-operator-scripts\") pod \"nova-cell0-49d8-account-create-update-gnbhc\" (UID: \"b33b7a35-52b8-47c6-b5a7-5cf87d838927\") " pod="openstack/nova-cell0-49d8-account-create-update-gnbhc" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.798447 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gw7l9\" (UniqueName: \"kubernetes.io/projected/b33b7a35-52b8-47c6-b5a7-5cf87d838927-kube-api-access-gw7l9\") pod 
\"nova-cell0-49d8-account-create-update-gnbhc\" (UID: \"b33b7a35-52b8-47c6-b5a7-5cf87d838927\") " pod="openstack/nova-cell0-49d8-account-create-update-gnbhc" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.798749 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe68065a-9702-4440-a09a-2698d21ad5cc-operator-scripts\") pod \"nova-api-264d-account-create-update-4z8cb\" (UID: \"fe68065a-9702-4440-a09a-2698d21ad5cc\") " pod="openstack/nova-api-264d-account-create-update-4z8cb" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.833165 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4h7rt\" (UniqueName: \"kubernetes.io/projected/fe68065a-9702-4440-a09a-2698d21ad5cc-kube-api-access-4h7rt\") pod \"nova-api-264d-account-create-update-4z8cb\" (UID: \"fe68065a-9702-4440-a09a-2698d21ad5cc\") " pod="openstack/nova-api-264d-account-create-update-4z8cb" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.871212 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-ddb8-account-create-update-zm48k"] Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.872538 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-ddb8-account-create-update-zm48k" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.874818 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.878955 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-264d-account-create-update-4z8cb" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.901420 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ecb8a996-384c-4155-b45d-6a6335165545-operator-scripts\") pod \"nova-cell1-db-create-fllmn\" (UID: \"ecb8a996-384c-4155-b45d-6a6335165545\") " pod="openstack/nova-cell1-db-create-fllmn" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.901528 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rwcd\" (UniqueName: \"kubernetes.io/projected/ecb8a996-384c-4155-b45d-6a6335165545-kube-api-access-8rwcd\") pod \"nova-cell1-db-create-fllmn\" (UID: \"ecb8a996-384c-4155-b45d-6a6335165545\") " pod="openstack/nova-cell1-db-create-fllmn" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.901665 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b33b7a35-52b8-47c6-b5a7-5cf87d838927-operator-scripts\") pod \"nova-cell0-49d8-account-create-update-gnbhc\" (UID: \"b33b7a35-52b8-47c6-b5a7-5cf87d838927\") " pod="openstack/nova-cell0-49d8-account-create-update-gnbhc" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.901747 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gw7l9\" (UniqueName: \"kubernetes.io/projected/b33b7a35-52b8-47c6-b5a7-5cf87d838927-kube-api-access-gw7l9\") pod \"nova-cell0-49d8-account-create-update-gnbhc\" (UID: \"b33b7a35-52b8-47c6-b5a7-5cf87d838927\") " pod="openstack/nova-cell0-49d8-account-create-update-gnbhc" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.902191 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/ecb8a996-384c-4155-b45d-6a6335165545-operator-scripts\") pod \"nova-cell1-db-create-fllmn\" (UID: \"ecb8a996-384c-4155-b45d-6a6335165545\") " pod="openstack/nova-cell1-db-create-fllmn" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.903407 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b33b7a35-52b8-47c6-b5a7-5cf87d838927-operator-scripts\") pod \"nova-cell0-49d8-account-create-update-gnbhc\" (UID: \"b33b7a35-52b8-47c6-b5a7-5cf87d838927\") " pod="openstack/nova-cell0-49d8-account-create-update-gnbhc" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.911755 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-ddb8-account-create-update-zm48k"] Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.918648 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.918947 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="49bcd071-b172-4180-996d-a8494ce80ab7" containerName="glance-log" containerID="cri-o://938d482072f52ec70bd25d780639f9001b17b5d4e8cfed165c79e03594adbc40" gracePeriod=30 Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.919461 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="49bcd071-b172-4180-996d-a8494ce80ab7" containerName="glance-httpd" containerID="cri-o://a2d9a00afd560361b63a4a984016f967c6c70fe342eda3b82ceb9f885d271c07" gracePeriod=30 Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.927925 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rwcd\" (UniqueName: \"kubernetes.io/projected/ecb8a996-384c-4155-b45d-6a6335165545-kube-api-access-8rwcd\") pod \"nova-cell1-db-create-fllmn\" (UID: \"ecb8a996-384c-4155-b45d-6a6335165545\") " pod="openstack/nova-cell1-db-create-fllmn" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.950928 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gw7l9\" (UniqueName: \"kubernetes.io/projected/b33b7a35-52b8-47c6-b5a7-5cf87d838927-kube-api-access-gw7l9\") pod \"nova-cell0-49d8-account-create-update-gnbhc\" (UID: \"b33b7a35-52b8-47c6-b5a7-5cf87d838927\") " pod="openstack/nova-cell0-49d8-account-create-update-gnbhc" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.995031 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-fllmn" Jan 22 14:02:55 crc kubenswrapper[4769]: I0122 14:02:55.003382 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cdcc2db5-9739-4e49-a6cc-3f7aff70f97d-operator-scripts\") pod \"nova-cell1-ddb8-account-create-update-zm48k\" (UID: \"cdcc2db5-9739-4e49-a6cc-3f7aff70f97d\") " pod="openstack/nova-cell1-ddb8-account-create-update-zm48k" Jan 22 14:02:55 crc kubenswrapper[4769]: I0122 14:02:55.003497 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2p7k8\" (UniqueName: \"kubernetes.io/projected/cdcc2db5-9739-4e49-a6cc-3f7aff70f97d-kube-api-access-2p7k8\") pod \"nova-cell1-ddb8-account-create-update-zm48k\" (UID: \"cdcc2db5-9739-4e49-a6cc-3f7aff70f97d\") " pod="openstack/nova-cell1-ddb8-account-create-update-zm48k" Jan 22 14:02:55 crc kubenswrapper[4769]: I0122 14:02:55.017806 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-49d8-account-create-update-gnbhc" Jan 22 14:02:55 crc kubenswrapper[4769]: I0122 14:02:55.104659 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2p7k8\" (UniqueName: \"kubernetes.io/projected/cdcc2db5-9739-4e49-a6cc-3f7aff70f97d-kube-api-access-2p7k8\") pod \"nova-cell1-ddb8-account-create-update-zm48k\" (UID: \"cdcc2db5-9739-4e49-a6cc-3f7aff70f97d\") " pod="openstack/nova-cell1-ddb8-account-create-update-zm48k" Jan 22 14:02:55 crc kubenswrapper[4769]: I0122 14:02:55.104787 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cdcc2db5-9739-4e49-a6cc-3f7aff70f97d-operator-scripts\") pod \"nova-cell1-ddb8-account-create-update-zm48k\" (UID: \"cdcc2db5-9739-4e49-a6cc-3f7aff70f97d\") " pod="openstack/nova-cell1-ddb8-account-create-update-zm48k" Jan 22 14:02:55 crc kubenswrapper[4769]: I0122 14:02:55.105520 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cdcc2db5-9739-4e49-a6cc-3f7aff70f97d-operator-scripts\") pod \"nova-cell1-ddb8-account-create-update-zm48k\" (UID: \"cdcc2db5-9739-4e49-a6cc-3f7aff70f97d\") " pod="openstack/nova-cell1-ddb8-account-create-update-zm48k" Jan 22 14:02:55 crc kubenswrapper[4769]: I0122 14:02:55.121634 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2p7k8\" (UniqueName: \"kubernetes.io/projected/cdcc2db5-9739-4e49-a6cc-3f7aff70f97d-kube-api-access-2p7k8\") pod \"nova-cell1-ddb8-account-create-update-zm48k\" (UID: \"cdcc2db5-9739-4e49-a6cc-3f7aff70f97d\") " pod="openstack/nova-cell1-ddb8-account-create-update-zm48k" Jan 22 14:02:55 crc kubenswrapper[4769]: I0122 14:02:55.171124 4769 generic.go:334] "Generic (PLEG): container finished" podID="49bcd071-b172-4180-996d-a8494ce80ab7" containerID="938d482072f52ec70bd25d780639f9001b17b5d4e8cfed165c79e03594adbc40" exitCode=143 Jan 22 14:02:55 crc kubenswrapper[4769]: I0122 14:02:55.171161 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"49bcd071-b172-4180-996d-a8494ce80ab7","Type":"ContainerDied","Data":"938d482072f52ec70bd25d780639f9001b17b5d4e8cfed165c79e03594adbc40"} Jan 22 14:02:55 crc kubenswrapper[4769]: I0122 14:02:55.204811 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-ddb8-account-create-update-zm48k" Jan 22 14:02:55 crc kubenswrapper[4769]: I0122 14:02:55.486461 4769 scope.go:117] "RemoveContainer" containerID="04c4a8706de1fbb034493ccbd107bf586baaf531c480261c94f054acfee6f908" Jan 22 14:02:55 crc kubenswrapper[4769]: I0122 14:02:55.622381 4769 scope.go:117] "RemoveContainer" containerID="d6a865911489b9a1028413866f392612dc71ad5cc1fae59e38104d4f68999e20" Jan 22 14:02:55 crc kubenswrapper[4769]: E0122 14:02:55.622989 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6a865911489b9a1028413866f392612dc71ad5cc1fae59e38104d4f68999e20\": container with ID starting with d6a865911489b9a1028413866f392612dc71ad5cc1fae59e38104d4f68999e20 not found: ID does not exist" containerID="d6a865911489b9a1028413866f392612dc71ad5cc1fae59e38104d4f68999e20" Jan 22 14:02:55 crc kubenswrapper[4769]: I0122 14:02:55.623024 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6a865911489b9a1028413866f392612dc71ad5cc1fae59e38104d4f68999e20"} err="failed to get container status \"d6a865911489b9a1028413866f392612dc71ad5cc1fae59e38104d4f68999e20\": rpc error: code = NotFound desc = could not find container \"d6a865911489b9a1028413866f392612dc71ad5cc1fae59e38104d4f68999e20\": container with ID starting with d6a865911489b9a1028413866f392612dc71ad5cc1fae59e38104d4f68999e20 not found: ID does not exist" Jan 22 14:02:55 crc kubenswrapper[4769]: I0122 14:02:55.623051 4769 scope.go:117] "RemoveContainer" containerID="04c4a8706de1fbb034493ccbd107bf586baaf531c480261c94f054acfee6f908" Jan 22 14:02:55 crc kubenswrapper[4769]: E0122 14:02:55.623557 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04c4a8706de1fbb034493ccbd107bf586baaf531c480261c94f054acfee6f908\": container with ID starting with 04c4a8706de1fbb034493ccbd107bf586baaf531c480261c94f054acfee6f908 not found: ID does not exist" containerID="04c4a8706de1fbb034493ccbd107bf586baaf531c480261c94f054acfee6f908" Jan 22 14:02:55 crc kubenswrapper[4769]: I0122 14:02:55.623593 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04c4a8706de1fbb034493ccbd107bf586baaf531c480261c94f054acfee6f908"} err="failed to get container status \"04c4a8706de1fbb034493ccbd107bf586baaf531c480261c94f054acfee6f908\": rpc error: code = NotFound desc = could not find container \"04c4a8706de1fbb034493ccbd107bf586baaf531c480261c94f054acfee6f908\": container with ID starting with 04c4a8706de1fbb034493ccbd107bf586baaf531c480261c94f054acfee6f908 not found: ID does not exist" Jan 22 14:02:55 crc kubenswrapper[4769]: I0122 14:02:55.872937 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-fllmn"] Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.103829 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-264d-account-create-update-4z8cb"] Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.197032 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-fllmn" event={"ID":"ecb8a996-384c-4155-b45d-6a6335165545","Type":"ContainerStarted","Data":"33d960cc92853c91418decd1c1e81af16c036144d8e551ab31b77730864076c3"} Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.204600 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-576cb8587-7cl26" 
event={"ID":"75afafe2-c784-45fa-8104-1115c8921138","Type":"ContainerStarted","Data":"c1234bb42d52ecd3fa353dab10a5ae2fa88e278117102689bcadb087bebbc3a7"} Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.205842 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-264d-account-create-update-4z8cb" event={"ID":"fe68065a-9702-4440-a09a-2698d21ad5cc","Type":"ContainerStarted","Data":"fb03596a8742e0abb8ca676e233fe992f1bbc203ca0cae509c668afd4e7766aa"} Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.209265 4769 generic.go:334] "Generic (PLEG): container finished" podID="dab0b9a4-13fb-42b5-be06-1231f96c4016" containerID="42b650e1bb6392891cc6da4a8a010ef12200563d87973891cd250c5a4e408d2f" exitCode=0 Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.209294 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dab0b9a4-13fb-42b5-be06-1231f96c4016","Type":"ContainerDied","Data":"42b650e1bb6392891cc6da4a8a010ef12200563d87973891cd250c5a4e408d2f"} Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.470580 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7ffdb95bfd-x5vfj" Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.566984 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.575062 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-5t26t"] Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.590149 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-tx7mp"] Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.609783 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-ddb8-account-create-update-zm48k"] Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.639580 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0783e518-6a8e-43a3-9b33-4d0710f958f6-ovndb-tls-certs\") pod \"0783e518-6a8e-43a3-9b33-4d0710f958f6\" (UID: \"0783e518-6a8e-43a3-9b33-4d0710f958f6\") " Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.639744 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0783e518-6a8e-43a3-9b33-4d0710f958f6-config\") pod \"0783e518-6a8e-43a3-9b33-4d0710f958f6\" (UID: \"0783e518-6a8e-43a3-9b33-4d0710f958f6\") " Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.639816 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0783e518-6a8e-43a3-9b33-4d0710f958f6-httpd-config\") pod \"0783e518-6a8e-43a3-9b33-4d0710f958f6\" (UID: \"0783e518-6a8e-43a3-9b33-4d0710f958f6\") " Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.640480 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtk6z\" (UniqueName: \"kubernetes.io/projected/0783e518-6a8e-43a3-9b33-4d0710f958f6-kube-api-access-jtk6z\") pod \"0783e518-6a8e-43a3-9b33-4d0710f958f6\" (UID: \"0783e518-6a8e-43a3-9b33-4d0710f958f6\") " Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.640541 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0783e518-6a8e-43a3-9b33-4d0710f958f6-combined-ca-bundle\") pod \"0783e518-6a8e-43a3-9b33-4d0710f958f6\" (UID: \"0783e518-6a8e-43a3-9b33-4d0710f958f6\") " Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.644043 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-49d8-account-create-update-gnbhc"] Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.672031 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0783e518-6a8e-43a3-9b33-4d0710f958f6-kube-api-access-jtk6z" (OuterVolumeSpecName: "kube-api-access-jtk6z") pod "0783e518-6a8e-43a3-9b33-4d0710f958f6" (UID: "0783e518-6a8e-43a3-9b33-4d0710f958f6"). InnerVolumeSpecName "kube-api-access-jtk6z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.680680 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0783e518-6a8e-43a3-9b33-4d0710f958f6-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "0783e518-6a8e-43a3-9b33-4d0710f958f6" (UID: "0783e518-6a8e-43a3-9b33-4d0710f958f6"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.742077 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e12c3fd8-b199-4dbb-8022-ea1997362b45-log-httpd\") pod \"e12c3fd8-b199-4dbb-8022-ea1997362b45\" (UID: \"e12c3fd8-b199-4dbb-8022-ea1997362b45\") " Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.742143 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e12c3fd8-b199-4dbb-8022-ea1997362b45-scripts\") pod \"e12c3fd8-b199-4dbb-8022-ea1997362b45\" (UID: \"e12c3fd8-b199-4dbb-8022-ea1997362b45\") " Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.742165 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e12c3fd8-b199-4dbb-8022-ea1997362b45-config-data\") pod \"e12c3fd8-b199-4dbb-8022-ea1997362b45\" (UID: \"e12c3fd8-b199-4dbb-8022-ea1997362b45\") " Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.742213 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e12c3fd8-b199-4dbb-8022-ea1997362b45-sg-core-conf-yaml\") pod \"e12c3fd8-b199-4dbb-8022-ea1997362b45\" (UID: \"e12c3fd8-b199-4dbb-8022-ea1997362b45\") " Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.742243 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e12c3fd8-b199-4dbb-8022-ea1997362b45-run-httpd\") pod \"e12c3fd8-b199-4dbb-8022-ea1997362b45\" (UID: \"e12c3fd8-b199-4dbb-8022-ea1997362b45\") " Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.742289 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e12c3fd8-b199-4dbb-8022-ea1997362b45-combined-ca-bundle\") pod \"e12c3fd8-b199-4dbb-8022-ea1997362b45\" (UID: \"e12c3fd8-b199-4dbb-8022-ea1997362b45\") " Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.742400 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6zn7\" (UniqueName: 
\"kubernetes.io/projected/e12c3fd8-b199-4dbb-8022-ea1997362b45-kube-api-access-l6zn7\") pod \"e12c3fd8-b199-4dbb-8022-ea1997362b45\" (UID: \"e12c3fd8-b199-4dbb-8022-ea1997362b45\") " Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.742727 4769 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0783e518-6a8e-43a3-9b33-4d0710f958f6-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.742739 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtk6z\" (UniqueName: \"kubernetes.io/projected/0783e518-6a8e-43a3-9b33-4d0710f958f6-kube-api-access-jtk6z\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.750124 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e12c3fd8-b199-4dbb-8022-ea1997362b45-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e12c3fd8-b199-4dbb-8022-ea1997362b45" (UID: "e12c3fd8-b199-4dbb-8022-ea1997362b45"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.751902 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e12c3fd8-b199-4dbb-8022-ea1997362b45-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e12c3fd8-b199-4dbb-8022-ea1997362b45" (UID: "e12c3fd8-b199-4dbb-8022-ea1997362b45"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.778094 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e12c3fd8-b199-4dbb-8022-ea1997362b45-kube-api-access-l6zn7" (OuterVolumeSpecName: "kube-api-access-l6zn7") pod "e12c3fd8-b199-4dbb-8022-ea1997362b45" (UID: "e12c3fd8-b199-4dbb-8022-ea1997362b45"). InnerVolumeSpecName "kube-api-access-l6zn7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.781763 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e12c3fd8-b199-4dbb-8022-ea1997362b45-scripts" (OuterVolumeSpecName: "scripts") pod "e12c3fd8-b199-4dbb-8022-ea1997362b45" (UID: "e12c3fd8-b199-4dbb-8022-ea1997362b45"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.843332 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e12c3fd8-b199-4dbb-8022-ea1997362b45-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e12c3fd8-b199-4dbb-8022-ea1997362b45" (UID: "e12c3fd8-b199-4dbb-8022-ea1997362b45"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.843609 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e12c3fd8-b199-4dbb-8022-ea1997362b45-sg-core-conf-yaml\") pod \"e12c3fd8-b199-4dbb-8022-ea1997362b45\" (UID: \"e12c3fd8-b199-4dbb-8022-ea1997362b45\") " Jan 22 14:02:56 crc kubenswrapper[4769]: W0122 14:02:56.843879 4769 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/e12c3fd8-b199-4dbb-8022-ea1997362b45/volumes/kubernetes.io~secret/sg-core-conf-yaml Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.843985 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e12c3fd8-b199-4dbb-8022-ea1997362b45-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e12c3fd8-b199-4dbb-8022-ea1997362b45" (UID: "e12c3fd8-b199-4dbb-8022-ea1997362b45"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.844645 4769 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e12c3fd8-b199-4dbb-8022-ea1997362b45-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.844665 4769 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e12c3fd8-b199-4dbb-8022-ea1997362b45-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.844675 4769 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e12c3fd8-b199-4dbb-8022-ea1997362b45-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.844685 4769 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e12c3fd8-b199-4dbb-8022-ea1997362b45-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.844693 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6zn7\" (UniqueName: \"kubernetes.io/projected/e12c3fd8-b199-4dbb-8022-ea1997362b45-kube-api-access-l6zn7\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.844985 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0783e518-6a8e-43a3-9b33-4d0710f958f6-config" (OuterVolumeSpecName: "config") pod "0783e518-6a8e-43a3-9b33-4d0710f958f6" (UID: "0783e518-6a8e-43a3-9b33-4d0710f958f6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.857057 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0783e518-6a8e-43a3-9b33-4d0710f958f6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0783e518-6a8e-43a3-9b33-4d0710f958f6" (UID: "0783e518-6a8e-43a3-9b33-4d0710f958f6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.872749 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0783e518-6a8e-43a3-9b33-4d0710f958f6-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "0783e518-6a8e-43a3-9b33-4d0710f958f6" (UID: "0783e518-6a8e-43a3-9b33-4d0710f958f6"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.910191 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e12c3fd8-b199-4dbb-8022-ea1997362b45-config-data" (OuterVolumeSpecName: "config-data") pod "e12c3fd8-b199-4dbb-8022-ea1997362b45" (UID: "e12c3fd8-b199-4dbb-8022-ea1997362b45"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.943538 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e12c3fd8-b199-4dbb-8022-ea1997362b45-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e12c3fd8-b199-4dbb-8022-ea1997362b45" (UID: "e12c3fd8-b199-4dbb-8022-ea1997362b45"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.948930 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/0783e518-6a8e-43a3-9b33-4d0710f958f6-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.948965 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0783e518-6a8e-43a3-9b33-4d0710f958f6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.948978 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e12c3fd8-b199-4dbb-8022-ea1997362b45-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.948987 4769 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0783e518-6a8e-43a3-9b33-4d0710f958f6-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.948995 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e12c3fd8-b199-4dbb-8022-ea1997362b45-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.098904 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.226426 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7ffdb95bfd-x5vfj" event={"ID":"0783e518-6a8e-43a3-9b33-4d0710f958f6","Type":"ContainerDied","Data":"7728df5824bdc02cf7f433c8c65dbea0209e0b45bf371c7fd3ff2a02c06db9ef"} Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.226439 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7ffdb95bfd-x5vfj" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.226480 4769 scope.go:117] "RemoveContainer" containerID="1a3f324f9c10250340c90b3fa9891a5895621c3821c7c74ce5c3074476e207b0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.229750 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-tx7mp" event={"ID":"288566dc-b78e-46e4-9bd3-c61bc9c2a6ce","Type":"ContainerStarted","Data":"9b721a5f2a54f7e10b9d6313d093c22bf6e06ca26d653a2b9eddb1cde91b429e"} Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.244639 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-576cb8587-7cl26" event={"ID":"75afafe2-c784-45fa-8104-1115c8921138","Type":"ContainerStarted","Data":"07a0d1ba9cf45b0092b37dd1c4795758a1430bdbc3cc5c2cd6708ce728099eba"} Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.245018 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-576cb8587-7cl26" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.245039 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-576cb8587-7cl26" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.256056 4769 generic.go:334] "Generic (PLEG): container finished" podID="fe68065a-9702-4440-a09a-2698d21ad5cc" containerID="751475c8a4f373e18f772a466e3903901a4fe7bb3bad0aaf09ffde9f52db0d97" exitCode=0 Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.256158 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-264d-account-create-update-4z8cb" event={"ID":"fe68065a-9702-4440-a09a-2698d21ad5cc","Type":"ContainerDied","Data":"751475c8a4f373e18f772a466e3903901a4fe7bb3bad0aaf09ffde9f52db0d97"} Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.256392 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"dab0b9a4-13fb-42b5-be06-1231f96c4016\" (UID: \"dab0b9a4-13fb-42b5-be06-1231f96c4016\") " Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.256495 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dab0b9a4-13fb-42b5-be06-1231f96c4016-combined-ca-bundle\") pod \"dab0b9a4-13fb-42b5-be06-1231f96c4016\" (UID: \"dab0b9a4-13fb-42b5-be06-1231f96c4016\") " Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.256545 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dab0b9a4-13fb-42b5-be06-1231f96c4016-httpd-run\") pod \"dab0b9a4-13fb-42b5-be06-1231f96c4016\" (UID: \"dab0b9a4-13fb-42b5-be06-1231f96c4016\") " Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.256582 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dab0b9a4-13fb-42b5-be06-1231f96c4016-config-data\") pod \"dab0b9a4-13fb-42b5-be06-1231f96c4016\" (UID: \"dab0b9a4-13fb-42b5-be06-1231f96c4016\") " Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.256603 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dab0b9a4-13fb-42b5-be06-1231f96c4016-logs\") pod \"dab0b9a4-13fb-42b5-be06-1231f96c4016\" (UID: \"dab0b9a4-13fb-42b5-be06-1231f96c4016\") " Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 
14:02:57.256665 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2ptr\" (UniqueName: \"kubernetes.io/projected/dab0b9a4-13fb-42b5-be06-1231f96c4016-kube-api-access-c2ptr\") pod \"dab0b9a4-13fb-42b5-be06-1231f96c4016\" (UID: \"dab0b9a4-13fb-42b5-be06-1231f96c4016\") " Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.256747 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dab0b9a4-13fb-42b5-be06-1231f96c4016-public-tls-certs\") pod \"dab0b9a4-13fb-42b5-be06-1231f96c4016\" (UID: \"dab0b9a4-13fb-42b5-be06-1231f96c4016\") " Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.256837 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dab0b9a4-13fb-42b5-be06-1231f96c4016-scripts\") pod \"dab0b9a4-13fb-42b5-be06-1231f96c4016\" (UID: \"dab0b9a4-13fb-42b5-be06-1231f96c4016\") " Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.257250 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dab0b9a4-13fb-42b5-be06-1231f96c4016-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "dab0b9a4-13fb-42b5-be06-1231f96c4016" (UID: "dab0b9a4-13fb-42b5-be06-1231f96c4016"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.257279 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dab0b9a4-13fb-42b5-be06-1231f96c4016-logs" (OuterVolumeSpecName: "logs") pod "dab0b9a4-13fb-42b5-be06-1231f96c4016" (UID: "dab0b9a4-13fb-42b5-be06-1231f96c4016"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.257626 4769 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dab0b9a4-13fb-42b5-be06-1231f96c4016-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.257650 4769 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dab0b9a4-13fb-42b5-be06-1231f96c4016-logs\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.282941 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dab0b9a4-13fb-42b5-be06-1231f96c4016","Type":"ContainerDied","Data":"d5ec275ecffbb843da730d80b73f7a952b5598fd63f1b7fb5564a3c77534d9ce"} Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.283029 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.289705 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dab0b9a4-13fb-42b5-be06-1231f96c4016-kube-api-access-c2ptr" (OuterVolumeSpecName: "kube-api-access-c2ptr") pod "dab0b9a4-13fb-42b5-be06-1231f96c4016" (UID: "dab0b9a4-13fb-42b5-be06-1231f96c4016"). InnerVolumeSpecName "kube-api-access-c2ptr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.291815 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-576cb8587-7cl26" podStartSLOduration=10.291775929 podStartE2EDuration="10.291775929s" podCreationTimestamp="2026-01-22 14:02:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:02:57.277967684 +0000 UTC m=+1156.689077623" watchObservedRunningTime="2026-01-22 14:02:57.291775929 +0000 UTC m=+1156.702885858" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.298384 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dab0b9a4-13fb-42b5-be06-1231f96c4016-scripts" (OuterVolumeSpecName: "scripts") pod "dab0b9a4-13fb-42b5-be06-1231f96c4016" (UID: "dab0b9a4-13fb-42b5-be06-1231f96c4016"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.302642 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "dab0b9a4-13fb-42b5-be06-1231f96c4016" (UID: "dab0b9a4-13fb-42b5-be06-1231f96c4016"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.303002 4769 generic.go:334] "Generic (PLEG): container finished" podID="ecb8a996-384c-4155-b45d-6a6335165545" containerID="be7b8f38b3fcc55abca045ec63342b69733efd9d1dc30413ccf64f860152d0b1" exitCode=0 Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.303094 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-fllmn" event={"ID":"ecb8a996-384c-4155-b45d-6a6335165545","Type":"ContainerDied","Data":"be7b8f38b3fcc55abca045ec63342b69733efd9d1dc30413ccf64f860152d0b1"} Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.305811 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-49d8-account-create-update-gnbhc" event={"ID":"b33b7a35-52b8-47c6-b5a7-5cf87d838927","Type":"ContainerStarted","Data":"3bde0705d34c87d4eabfe7fb123b426bb1c060e1a93c38781b2d5073620c51be"} Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.314616 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-ddb8-account-create-update-zm48k" event={"ID":"cdcc2db5-9739-4e49-a6cc-3f7aff70f97d","Type":"ContainerStarted","Data":"08b0b5abfe60f5c3c4d81e0794fb73d02949bc2843159af9976a8ea288ce36e5"} Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.319268 4769 scope.go:117] "RemoveContainer" containerID="c85ede29f7444218742a32b8c6ee6ce640aed0f91c712213650abe7455210e79" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.335609 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7ffdb95bfd-x5vfj"] Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.337130 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-5t26t" event={"ID":"e45f7c9a-23a2-40fe-80dc-305f1fbc8e17","Type":"ContainerStarted","Data":"c1e8dfd11532902b9aba6d45844dcf3a73a1816450e5c693654fc410ab3cb953"} Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.349818 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7ffdb95bfd-x5vfj"] Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 
14:02:57.351303 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e12c3fd8-b199-4dbb-8022-ea1997362b45","Type":"ContainerDied","Data":"6847e6f717a917e8f33fe5f7732739ecc0907695151d12527fc0722d9980fff4"} Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.351453 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.361290 4769 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dab0b9a4-13fb-42b5-be06-1231f96c4016-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.361332 4769 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.361344 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2ptr\" (UniqueName: \"kubernetes.io/projected/dab0b9a4-13fb-42b5-be06-1231f96c4016-kube-api-access-c2ptr\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.370538 4769 scope.go:117] "RemoveContainer" containerID="42b650e1bb6392891cc6da4a8a010ef12200563d87973891cd250c5a4e408d2f" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.423659 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.431568 4769 scope.go:117] "RemoveContainer" containerID="df6c3aec1d93c8e3b135e0f0f09265bd6003dda7e97e74ba5f9864130b43bcee" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.431701 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.440091 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 22 14:02:57 crc kubenswrapper[4769]: E0122 14:02:57.440604 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0783e518-6a8e-43a3-9b33-4d0710f958f6" containerName="neutron-api" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.441415 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="0783e518-6a8e-43a3-9b33-4d0710f958f6" containerName="neutron-api" Jan 22 14:02:57 crc kubenswrapper[4769]: E0122 14:02:57.441530 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dab0b9a4-13fb-42b5-be06-1231f96c4016" containerName="glance-log" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.441635 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="dab0b9a4-13fb-42b5-be06-1231f96c4016" containerName="glance-log" Jan 22 14:02:57 crc kubenswrapper[4769]: E0122 14:02:57.441711 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dab0b9a4-13fb-42b5-be06-1231f96c4016" containerName="glance-httpd" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.441773 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="dab0b9a4-13fb-42b5-be06-1231f96c4016" containerName="glance-httpd" Jan 22 14:02:57 crc kubenswrapper[4769]: E0122 14:02:57.441869 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0783e518-6a8e-43a3-9b33-4d0710f958f6" containerName="neutron-httpd" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.441942 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="0783e518-6a8e-43a3-9b33-4d0710f958f6" 
containerName="neutron-httpd" Jan 22 14:02:57 crc kubenswrapper[4769]: E0122 14:02:57.442015 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e12c3fd8-b199-4dbb-8022-ea1997362b45" containerName="proxy-httpd" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.442106 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="e12c3fd8-b199-4dbb-8022-ea1997362b45" containerName="proxy-httpd" Jan 22 14:02:57 crc kubenswrapper[4769]: E0122 14:02:57.442218 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e12c3fd8-b199-4dbb-8022-ea1997362b45" containerName="ceilometer-notification-agent" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.442692 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="e12c3fd8-b199-4dbb-8022-ea1997362b45" containerName="ceilometer-notification-agent" Jan 22 14:02:57 crc kubenswrapper[4769]: E0122 14:02:57.442834 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e12c3fd8-b199-4dbb-8022-ea1997362b45" containerName="sg-core" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.443022 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="e12c3fd8-b199-4dbb-8022-ea1997362b45" containerName="sg-core" Jan 22 14:02:57 crc kubenswrapper[4769]: E0122 14:02:57.443122 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e12c3fd8-b199-4dbb-8022-ea1997362b45" containerName="ceilometer-central-agent" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.443203 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="e12c3fd8-b199-4dbb-8022-ea1997362b45" containerName="ceilometer-central-agent" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.443532 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="e12c3fd8-b199-4dbb-8022-ea1997362b45" containerName="proxy-httpd" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.443691 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="e12c3fd8-b199-4dbb-8022-ea1997362b45" containerName="ceilometer-notification-agent" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.444270 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="e12c3fd8-b199-4dbb-8022-ea1997362b45" containerName="ceilometer-central-agent" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.444407 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="0783e518-6a8e-43a3-9b33-4d0710f958f6" containerName="neutron-api" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.444522 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="e12c3fd8-b199-4dbb-8022-ea1997362b45" containerName="sg-core" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.444627 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="dab0b9a4-13fb-42b5-be06-1231f96c4016" containerName="glance-log" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.444743 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="dab0b9a4-13fb-42b5-be06-1231f96c4016" containerName="glance-httpd" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.444865 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="0783e518-6a8e-43a3-9b33-4d0710f958f6" containerName="neutron-httpd" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.449482 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.451985 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.454122 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.455902 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.476650 4769 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.484755 4769 scope.go:117] "RemoveContainer" containerID="0bf74afc1bd09f3d8c6303b0e19d9074d9577290bb273a6f32a45d4dcae632a3" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.518040 4769 scope.go:117] "RemoveContainer" containerID="b5ee5434348cf923fba435a2559a5a264053474440f8130af21c2d5bd4b2a22c" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.518783 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dab0b9a4-13fb-42b5-be06-1231f96c4016-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dab0b9a4-13fb-42b5-be06-1231f96c4016" (UID: "dab0b9a4-13fb-42b5-be06-1231f96c4016"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.552217 4769 scope.go:117] "RemoveContainer" containerID="81b00fa0cdcc67e791a9afbc3e7519246869d2324e6cda565a71161bcb2fc223" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.563343 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dab0b9a4-13fb-42b5-be06-1231f96c4016-config-data" (OuterVolumeSpecName: "config-data") pod "dab0b9a4-13fb-42b5-be06-1231f96c4016" (UID: "dab0b9a4-13fb-42b5-be06-1231f96c4016"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.565013 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dab0b9a4-13fb-42b5-be06-1231f96c4016-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "dab0b9a4-13fb-42b5-be06-1231f96c4016" (UID: "dab0b9a4-13fb-42b5-be06-1231f96c4016"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.565612 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/11b92673-89ea-4ef5-87f5-743e06fcb861-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"11b92673-89ea-4ef5-87f5-743e06fcb861\") " pod="openstack/ceilometer-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.565663 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11b92673-89ea-4ef5-87f5-743e06fcb861-scripts\") pod \"ceilometer-0\" (UID: \"11b92673-89ea-4ef5-87f5-743e06fcb861\") " pod="openstack/ceilometer-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.565690 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11b92673-89ea-4ef5-87f5-743e06fcb861-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"11b92673-89ea-4ef5-87f5-743e06fcb861\") " pod="openstack/ceilometer-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.565714 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/11b92673-89ea-4ef5-87f5-743e06fcb861-log-httpd\") pod \"ceilometer-0\" (UID: \"11b92673-89ea-4ef5-87f5-743e06fcb861\") " pod="openstack/ceilometer-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.565732 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wn6k\" (UniqueName: \"kubernetes.io/projected/11b92673-89ea-4ef5-87f5-743e06fcb861-kube-api-access-5wn6k\") pod \"ceilometer-0\" (UID: \"11b92673-89ea-4ef5-87f5-743e06fcb861\") " pod="openstack/ceilometer-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.565780 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/11b92673-89ea-4ef5-87f5-743e06fcb861-run-httpd\") pod \"ceilometer-0\" (UID: \"11b92673-89ea-4ef5-87f5-743e06fcb861\") " pod="openstack/ceilometer-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.565841 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11b92673-89ea-4ef5-87f5-743e06fcb861-config-data\") pod \"ceilometer-0\" (UID: \"11b92673-89ea-4ef5-87f5-743e06fcb861\") " pod="openstack/ceilometer-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.565928 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dab0b9a4-13fb-42b5-be06-1231f96c4016-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.565942 4769 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dab0b9a4-13fb-42b5-be06-1231f96c4016-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.565956 4769 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.565968 4769 reconciler_common.go:293] "Volume detached for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dab0b9a4-13fb-42b5-be06-1231f96c4016-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.611251 4769 scope.go:117] "RemoveContainer" containerID="3f046e94cf581905bfb412cafcc0aba6ed78f4b25c54f79b4edd2b0575beed31" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.670327 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.671298 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11b92673-89ea-4ef5-87f5-743e06fcb861-scripts\") pod \"ceilometer-0\" (UID: \"11b92673-89ea-4ef5-87f5-743e06fcb861\") " pod="openstack/ceilometer-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.671335 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11b92673-89ea-4ef5-87f5-743e06fcb861-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"11b92673-89ea-4ef5-87f5-743e06fcb861\") " pod="openstack/ceilometer-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.671346 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.671362 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/11b92673-89ea-4ef5-87f5-743e06fcb861-log-httpd\") pod \"ceilometer-0\" (UID: \"11b92673-89ea-4ef5-87f5-743e06fcb861\") " pod="openstack/ceilometer-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.671447 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wn6k\" (UniqueName: \"kubernetes.io/projected/11b92673-89ea-4ef5-87f5-743e06fcb861-kube-api-access-5wn6k\") pod \"ceilometer-0\" (UID: \"11b92673-89ea-4ef5-87f5-743e06fcb861\") " pod="openstack/ceilometer-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.671613 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/11b92673-89ea-4ef5-87f5-743e06fcb861-run-httpd\") pod \"ceilometer-0\" (UID: \"11b92673-89ea-4ef5-87f5-743e06fcb861\") " pod="openstack/ceilometer-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.671667 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11b92673-89ea-4ef5-87f5-743e06fcb861-config-data\") pod \"ceilometer-0\" (UID: \"11b92673-89ea-4ef5-87f5-743e06fcb861\") " pod="openstack/ceilometer-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.671784 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/11b92673-89ea-4ef5-87f5-743e06fcb861-log-httpd\") pod \"ceilometer-0\" (UID: \"11b92673-89ea-4ef5-87f5-743e06fcb861\") " pod="openstack/ceilometer-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.671880 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/11b92673-89ea-4ef5-87f5-743e06fcb861-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"11b92673-89ea-4ef5-87f5-743e06fcb861\") " pod="openstack/ceilometer-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.675635 4769 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/11b92673-89ea-4ef5-87f5-743e06fcb861-run-httpd\") pod \"ceilometer-0\" (UID: \"11b92673-89ea-4ef5-87f5-743e06fcb861\") " pod="openstack/ceilometer-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.677619 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11b92673-89ea-4ef5-87f5-743e06fcb861-config-data\") pod \"ceilometer-0\" (UID: \"11b92673-89ea-4ef5-87f5-743e06fcb861\") " pod="openstack/ceilometer-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.678094 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11b92673-89ea-4ef5-87f5-743e06fcb861-scripts\") pod \"ceilometer-0\" (UID: \"11b92673-89ea-4ef5-87f5-743e06fcb861\") " pod="openstack/ceilometer-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.688163 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11b92673-89ea-4ef5-87f5-743e06fcb861-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"11b92673-89ea-4ef5-87f5-743e06fcb861\") " pod="openstack/ceilometer-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.695391 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/11b92673-89ea-4ef5-87f5-743e06fcb861-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"11b92673-89ea-4ef5-87f5-743e06fcb861\") " pod="openstack/ceilometer-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.697497 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wn6k\" (UniqueName: \"kubernetes.io/projected/11b92673-89ea-4ef5-87f5-743e06fcb861-kube-api-access-5wn6k\") pod \"ceilometer-0\" (UID: \"11b92673-89ea-4ef5-87f5-743e06fcb861\") " pod="openstack/ceilometer-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.738724 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.740460 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.743187 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.743359 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.773280 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.780128 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.874642 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e1405ea-42cd-4345-b44a-8e72350a3357-config-data\") pod \"glance-default-external-api-0\" (UID: \"6e1405ea-42cd-4345-b44a-8e72350a3357\") " pod="openstack/glance-default-external-api-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.874693 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e1405ea-42cd-4345-b44a-8e72350a3357-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6e1405ea-42cd-4345-b44a-8e72350a3357\") " pod="openstack/glance-default-external-api-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.874723 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"6e1405ea-42cd-4345-b44a-8e72350a3357\") " pod="openstack/glance-default-external-api-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.874805 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6e1405ea-42cd-4345-b44a-8e72350a3357-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6e1405ea-42cd-4345-b44a-8e72350a3357\") " pod="openstack/glance-default-external-api-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.875048 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e1405ea-42cd-4345-b44a-8e72350a3357-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"6e1405ea-42cd-4345-b44a-8e72350a3357\") " pod="openstack/glance-default-external-api-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.875069 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e1405ea-42cd-4345-b44a-8e72350a3357-logs\") pod \"glance-default-external-api-0\" (UID: \"6e1405ea-42cd-4345-b44a-8e72350a3357\") " pod="openstack/glance-default-external-api-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.875087 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e1405ea-42cd-4345-b44a-8e72350a3357-scripts\") pod \"glance-default-external-api-0\" (UID: \"6e1405ea-42cd-4345-b44a-8e72350a3357\") " pod="openstack/glance-default-external-api-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.875135 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ffjw\" (UniqueName: \"kubernetes.io/projected/6e1405ea-42cd-4345-b44a-8e72350a3357-kube-api-access-9ffjw\") pod \"glance-default-external-api-0\" (UID: \"6e1405ea-42cd-4345-b44a-8e72350a3357\") " pod="openstack/glance-default-external-api-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.977412 4769 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-9ffjw\" (UniqueName: \"kubernetes.io/projected/6e1405ea-42cd-4345-b44a-8e72350a3357-kube-api-access-9ffjw\") pod \"glance-default-external-api-0\" (UID: \"6e1405ea-42cd-4345-b44a-8e72350a3357\") " pod="openstack/glance-default-external-api-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.977775 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e1405ea-42cd-4345-b44a-8e72350a3357-config-data\") pod \"glance-default-external-api-0\" (UID: \"6e1405ea-42cd-4345-b44a-8e72350a3357\") " pod="openstack/glance-default-external-api-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.977819 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e1405ea-42cd-4345-b44a-8e72350a3357-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6e1405ea-42cd-4345-b44a-8e72350a3357\") " pod="openstack/glance-default-external-api-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.977871 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"6e1405ea-42cd-4345-b44a-8e72350a3357\") " pod="openstack/glance-default-external-api-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.977991 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6e1405ea-42cd-4345-b44a-8e72350a3357-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6e1405ea-42cd-4345-b44a-8e72350a3357\") " pod="openstack/glance-default-external-api-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.978077 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e1405ea-42cd-4345-b44a-8e72350a3357-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"6e1405ea-42cd-4345-b44a-8e72350a3357\") " pod="openstack/glance-default-external-api-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.978122 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e1405ea-42cd-4345-b44a-8e72350a3357-logs\") pod \"glance-default-external-api-0\" (UID: \"6e1405ea-42cd-4345-b44a-8e72350a3357\") " pod="openstack/glance-default-external-api-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.978146 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e1405ea-42cd-4345-b44a-8e72350a3357-scripts\") pod \"glance-default-external-api-0\" (UID: \"6e1405ea-42cd-4345-b44a-8e72350a3357\") " pod="openstack/glance-default-external-api-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.980844 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6e1405ea-42cd-4345-b44a-8e72350a3357-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6e1405ea-42cd-4345-b44a-8e72350a3357\") " pod="openstack/glance-default-external-api-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.981309 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/6e1405ea-42cd-4345-b44a-8e72350a3357-logs\") pod \"glance-default-external-api-0\" (UID: \"6e1405ea-42cd-4345-b44a-8e72350a3357\") " pod="openstack/glance-default-external-api-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.981938 4769 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"6e1405ea-42cd-4345-b44a-8e72350a3357\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-external-api-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.984802 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e1405ea-42cd-4345-b44a-8e72350a3357-config-data\") pod \"glance-default-external-api-0\" (UID: \"6e1405ea-42cd-4345-b44a-8e72350a3357\") " pod="openstack/glance-default-external-api-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.985355 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e1405ea-42cd-4345-b44a-8e72350a3357-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"6e1405ea-42cd-4345-b44a-8e72350a3357\") " pod="openstack/glance-default-external-api-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.987137 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e1405ea-42cd-4345-b44a-8e72350a3357-scripts\") pod \"glance-default-external-api-0\" (UID: \"6e1405ea-42cd-4345-b44a-8e72350a3357\") " pod="openstack/glance-default-external-api-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.990587 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e1405ea-42cd-4345-b44a-8e72350a3357-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6e1405ea-42cd-4345-b44a-8e72350a3357\") " pod="openstack/glance-default-external-api-0" Jan 22 14:02:58 crc kubenswrapper[4769]: I0122 14:02:58.003953 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ffjw\" (UniqueName: \"kubernetes.io/projected/6e1405ea-42cd-4345-b44a-8e72350a3357-kube-api-access-9ffjw\") pod \"glance-default-external-api-0\" (UID: \"6e1405ea-42cd-4345-b44a-8e72350a3357\") " pod="openstack/glance-default-external-api-0" Jan 22 14:02:58 crc kubenswrapper[4769]: I0122 14:02:58.019658 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"6e1405ea-42cd-4345-b44a-8e72350a3357\") " pod="openstack/glance-default-external-api-0" Jan 22 14:02:58 crc kubenswrapper[4769]: I0122 14:02:58.095900 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 22 14:02:58 crc kubenswrapper[4769]: I0122 14:02:58.253358 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 14:02:58 crc kubenswrapper[4769]: W0122 14:02:58.290247 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod11b92673_89ea_4ef5_87f5_743e06fcb861.slice/crio-90073d3abb4df7c2f402c287b27f42fcd53565c4e6e648db72612d0dd2e0511c WatchSource:0}: Error finding container 90073d3abb4df7c2f402c287b27f42fcd53565c4e6e648db72612d0dd2e0511c: Status 404 returned error can't find the container with id 90073d3abb4df7c2f402c287b27f42fcd53565c4e6e648db72612d0dd2e0511c Jan 22 14:02:58 crc kubenswrapper[4769]: I0122 14:02:58.376391 4769 generic.go:334] "Generic (PLEG): container finished" podID="288566dc-b78e-46e4-9bd3-c61bc9c2a6ce" containerID="afb16cda8136e3c60a4cc4eee0a34fec39387efd7fcb1e371afcd2d6220a3675" exitCode=0 Jan 22 14:02:58 crc kubenswrapper[4769]: I0122 14:02:58.376842 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-tx7mp" event={"ID":"288566dc-b78e-46e4-9bd3-c61bc9c2a6ce","Type":"ContainerDied","Data":"afb16cda8136e3c60a4cc4eee0a34fec39387efd7fcb1e371afcd2d6220a3675"} Jan 22 14:02:58 crc kubenswrapper[4769]: I0122 14:02:58.381127 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"a46459a9-7fab-439c-95fe-5d6cdcb16997","Type":"ContainerStarted","Data":"041dbb0cf121e394f1c409f34093072bd77aeb78a757dac85ac4af70442e6978"} Jan 22 14:02:58 crc kubenswrapper[4769]: I0122 14:02:58.392532 4769 generic.go:334] "Generic (PLEG): container finished" podID="cdcc2db5-9739-4e49-a6cc-3f7aff70f97d" containerID="35419b0caadf70dae858a9997b2843ac8c049f423da3e9c017409f33d3f2290e" exitCode=0 Jan 22 14:02:58 crc kubenswrapper[4769]: I0122 14:02:58.392611 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-ddb8-account-create-update-zm48k" event={"ID":"cdcc2db5-9739-4e49-a6cc-3f7aff70f97d","Type":"ContainerDied","Data":"35419b0caadf70dae858a9997b2843ac8c049f423da3e9c017409f33d3f2290e"} Jan 22 14:02:58 crc kubenswrapper[4769]: I0122 14:02:58.417891 4769 generic.go:334] "Generic (PLEG): container finished" podID="e45f7c9a-23a2-40fe-80dc-305f1fbc8e17" containerID="5bf2e7be98fe42d0c15cb0b41bd3e6c08f22798c04acc10db52946a1a04187f4" exitCode=0 Jan 22 14:02:58 crc kubenswrapper[4769]: I0122 14:02:58.418117 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-5t26t" event={"ID":"e45f7c9a-23a2-40fe-80dc-305f1fbc8e17","Type":"ContainerDied","Data":"5bf2e7be98fe42d0c15cb0b41bd3e6c08f22798c04acc10db52946a1a04187f4"} Jan 22 14:02:58 crc kubenswrapper[4769]: I0122 14:02:58.423378 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"11b92673-89ea-4ef5-87f5-743e06fcb861","Type":"ContainerStarted","Data":"90073d3abb4df7c2f402c287b27f42fcd53565c4e6e648db72612d0dd2e0511c"} Jan 22 14:02:58 crc kubenswrapper[4769]: I0122 14:02:58.445472 4769 generic.go:334] "Generic (PLEG): container finished" podID="b33b7a35-52b8-47c6-b5a7-5cf87d838927" containerID="98cf78384a8d16885b92b730a74a3979d2ab97411451096f63dae1f0143aa7f4" exitCode=0 Jan 22 14:02:58 crc kubenswrapper[4769]: I0122 14:02:58.445529 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-49d8-account-create-update-gnbhc" 
event={"ID":"b33b7a35-52b8-47c6-b5a7-5cf87d838927","Type":"ContainerDied","Data":"98cf78384a8d16885b92b730a74a3979d2ab97411451096f63dae1f0143aa7f4"} Jan 22 14:02:58 crc kubenswrapper[4769]: I0122 14:02:58.448322 4769 generic.go:334] "Generic (PLEG): container finished" podID="49bcd071-b172-4180-996d-a8494ce80ab7" containerID="a2d9a00afd560361b63a4a984016f967c6c70fe342eda3b82ceb9f885d271c07" exitCode=0 Jan 22 14:02:58 crc kubenswrapper[4769]: I0122 14:02:58.449087 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"49bcd071-b172-4180-996d-a8494ce80ab7","Type":"ContainerDied","Data":"a2d9a00afd560361b63a4a984016f967c6c70fe342eda3b82ceb9f885d271c07"} Jan 22 14:02:58 crc kubenswrapper[4769]: I0122 14:02:58.451604 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.272708817 podStartE2EDuration="17.451593168s" podCreationTimestamp="2026-01-22 14:02:41 +0000 UTC" firstStartedPulling="2026-01-22 14:02:42.122811962 +0000 UTC m=+1141.533921891" lastFinishedPulling="2026-01-22 14:02:56.301696313 +0000 UTC m=+1155.712806242" observedRunningTime="2026-01-22 14:02:58.438181264 +0000 UTC m=+1157.849291193" watchObservedRunningTime="2026-01-22 14:02:58.451593168 +0000 UTC m=+1157.862703097" Jan 22 14:02:58 crc kubenswrapper[4769]: I0122 14:02:58.620901 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 14:02:58 crc kubenswrapper[4769]: I0122 14:02:58.898639 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0783e518-6a8e-43a3-9b33-4d0710f958f6" path="/var/lib/kubelet/pods/0783e518-6a8e-43a3-9b33-4d0710f958f6/volumes" Jan 22 14:02:58 crc kubenswrapper[4769]: I0122 14:02:58.900223 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dab0b9a4-13fb-42b5-be06-1231f96c4016" path="/var/lib/kubelet/pods/dab0b9a4-13fb-42b5-be06-1231f96c4016/volumes" Jan 22 14:02:58 crc kubenswrapper[4769]: I0122 14:02:58.901944 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e12c3fd8-b199-4dbb-8022-ea1997362b45" path="/var/lib/kubelet/pods/e12c3fd8-b199-4dbb-8022-ea1997362b45/volumes" Jan 22 14:02:58 crc kubenswrapper[4769]: I0122 14:02:58.979984 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 22 14:02:58 crc kubenswrapper[4769]: I0122 14:02:58.991021 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-264d-account-create-update-4z8cb" Jan 22 14:02:58 crc kubenswrapper[4769]: I0122 14:02:58.999577 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-fllmn" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.022905 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.106626 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rwcd\" (UniqueName: \"kubernetes.io/projected/ecb8a996-384c-4155-b45d-6a6335165545-kube-api-access-8rwcd\") pod \"ecb8a996-384c-4155-b45d-6a6335165545\" (UID: \"ecb8a996-384c-4155-b45d-6a6335165545\") " Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.107003 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk722\" (UniqueName: \"kubernetes.io/projected/49bcd071-b172-4180-996d-a8494ce80ab7-kube-api-access-tk722\") pod \"49bcd071-b172-4180-996d-a8494ce80ab7\" (UID: \"49bcd071-b172-4180-996d-a8494ce80ab7\") " Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.107037 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"49bcd071-b172-4180-996d-a8494ce80ab7\" (UID: \"49bcd071-b172-4180-996d-a8494ce80ab7\") " Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.107107 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/49bcd071-b172-4180-996d-a8494ce80ab7-httpd-run\") pod \"49bcd071-b172-4180-996d-a8494ce80ab7\" (UID: \"49bcd071-b172-4180-996d-a8494ce80ab7\") " Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.107137 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49bcd071-b172-4180-996d-a8494ce80ab7-scripts\") pod \"49bcd071-b172-4180-996d-a8494ce80ab7\" (UID: \"49bcd071-b172-4180-996d-a8494ce80ab7\") " Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.107208 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe68065a-9702-4440-a09a-2698d21ad5cc-operator-scripts\") pod \"fe68065a-9702-4440-a09a-2698d21ad5cc\" (UID: \"fe68065a-9702-4440-a09a-2698d21ad5cc\") " Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.107265 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49bcd071-b172-4180-996d-a8494ce80ab7-logs\") pod \"49bcd071-b172-4180-996d-a8494ce80ab7\" (UID: \"49bcd071-b172-4180-996d-a8494ce80ab7\") " Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.107323 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4h7rt\" (UniqueName: \"kubernetes.io/projected/fe68065a-9702-4440-a09a-2698d21ad5cc-kube-api-access-4h7rt\") pod \"fe68065a-9702-4440-a09a-2698d21ad5cc\" (UID: \"fe68065a-9702-4440-a09a-2698d21ad5cc\") " Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.107898 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49bcd071-b172-4180-996d-a8494ce80ab7-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "49bcd071-b172-4180-996d-a8494ce80ab7" (UID: "49bcd071-b172-4180-996d-a8494ce80ab7"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.108260 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe68065a-9702-4440-a09a-2698d21ad5cc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fe68065a-9702-4440-a09a-2698d21ad5cc" (UID: "fe68065a-9702-4440-a09a-2698d21ad5cc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.110274 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49bcd071-b172-4180-996d-a8494ce80ab7-logs" (OuterVolumeSpecName: "logs") pod "49bcd071-b172-4180-996d-a8494ce80ab7" (UID: "49bcd071-b172-4180-996d-a8494ce80ab7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.114973 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49bcd071-b172-4180-996d-a8494ce80ab7-config-data\") pod \"49bcd071-b172-4180-996d-a8494ce80ab7\" (UID: \"49bcd071-b172-4180-996d-a8494ce80ab7\") " Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.115033 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/49bcd071-b172-4180-996d-a8494ce80ab7-internal-tls-certs\") pod \"49bcd071-b172-4180-996d-a8494ce80ab7\" (UID: \"49bcd071-b172-4180-996d-a8494ce80ab7\") " Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.115057 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance") pod "49bcd071-b172-4180-996d-a8494ce80ab7" (UID: "49bcd071-b172-4180-996d-a8494ce80ab7"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.115067 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecb8a996-384c-4155-b45d-6a6335165545-kube-api-access-8rwcd" (OuterVolumeSpecName: "kube-api-access-8rwcd") pod "ecb8a996-384c-4155-b45d-6a6335165545" (UID: "ecb8a996-384c-4155-b45d-6a6335165545"). InnerVolumeSpecName "kube-api-access-8rwcd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.115102 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49bcd071-b172-4180-996d-a8494ce80ab7-combined-ca-bundle\") pod \"49bcd071-b172-4180-996d-a8494ce80ab7\" (UID: \"49bcd071-b172-4180-996d-a8494ce80ab7\") " Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.115130 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ecb8a996-384c-4155-b45d-6a6335165545-operator-scripts\") pod \"ecb8a996-384c-4155-b45d-6a6335165545\" (UID: \"ecb8a996-384c-4155-b45d-6a6335165545\") " Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.115853 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rwcd\" (UniqueName: \"kubernetes.io/projected/ecb8a996-384c-4155-b45d-6a6335165545-kube-api-access-8rwcd\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.115881 4769 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.115891 4769 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/49bcd071-b172-4180-996d-a8494ce80ab7-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.115899 4769 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe68065a-9702-4440-a09a-2698d21ad5cc-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.115908 4769 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49bcd071-b172-4180-996d-a8494ce80ab7-logs\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.117226 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe68065a-9702-4440-a09a-2698d21ad5cc-kube-api-access-4h7rt" (OuterVolumeSpecName: "kube-api-access-4h7rt") pod "fe68065a-9702-4440-a09a-2698d21ad5cc" (UID: "fe68065a-9702-4440-a09a-2698d21ad5cc"). InnerVolumeSpecName "kube-api-access-4h7rt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.118066 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ecb8a996-384c-4155-b45d-6a6335165545-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ecb8a996-384c-4155-b45d-6a6335165545" (UID: "ecb8a996-384c-4155-b45d-6a6335165545"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.120229 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49bcd071-b172-4180-996d-a8494ce80ab7-scripts" (OuterVolumeSpecName: "scripts") pod "49bcd071-b172-4180-996d-a8494ce80ab7" (UID: "49bcd071-b172-4180-996d-a8494ce80ab7"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.120277 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49bcd071-b172-4180-996d-a8494ce80ab7-kube-api-access-tk722" (OuterVolumeSpecName: "kube-api-access-tk722") pod "49bcd071-b172-4180-996d-a8494ce80ab7" (UID: "49bcd071-b172-4180-996d-a8494ce80ab7"). InnerVolumeSpecName "kube-api-access-tk722". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.143189 4769 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.162304 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49bcd071-b172-4180-996d-a8494ce80ab7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "49bcd071-b172-4180-996d-a8494ce80ab7" (UID: "49bcd071-b172-4180-996d-a8494ce80ab7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.196706 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49bcd071-b172-4180-996d-a8494ce80ab7-config-data" (OuterVolumeSpecName: "config-data") pod "49bcd071-b172-4180-996d-a8494ce80ab7" (UID: "49bcd071-b172-4180-996d-a8494ce80ab7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.218102 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49bcd071-b172-4180-996d-a8494ce80ab7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.218143 4769 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ecb8a996-384c-4155-b45d-6a6335165545-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.218156 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk722\" (UniqueName: \"kubernetes.io/projected/49bcd071-b172-4180-996d-a8494ce80ab7-kube-api-access-tk722\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.218174 4769 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.218188 4769 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49bcd071-b172-4180-996d-a8494ce80ab7-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.218196 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4h7rt\" (UniqueName: \"kubernetes.io/projected/fe68065a-9702-4440-a09a-2698d21ad5cc-kube-api-access-4h7rt\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.218204 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49bcd071-b172-4180-996d-a8494ce80ab7-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.223959 4769 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49bcd071-b172-4180-996d-a8494ce80ab7-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "49bcd071-b172-4180-996d-a8494ce80ab7" (UID: "49bcd071-b172-4180-996d-a8494ce80ab7"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.320056 4769 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/49bcd071-b172-4180-996d-a8494ce80ab7-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.460047 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6e1405ea-42cd-4345-b44a-8e72350a3357","Type":"ContainerStarted","Data":"8ad6347010d6112ee922996a6b2ff35db5d866513c76ddfc4c83fac04ed5249f"} Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.462943 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-264d-account-create-update-4z8cb" event={"ID":"fe68065a-9702-4440-a09a-2698d21ad5cc","Type":"ContainerDied","Data":"fb03596a8742e0abb8ca676e233fe992f1bbc203ca0cae509c668afd4e7766aa"} Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.462985 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb03596a8742e0abb8ca676e233fe992f1bbc203ca0cae509c668afd4e7766aa" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.462992 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-264d-account-create-update-4z8cb" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.464839 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"11b92673-89ea-4ef5-87f5-743e06fcb861","Type":"ContainerStarted","Data":"cc12fb32b2246b53fd5e4f662494ad847d5f628bbbf6ac0a4c83be5adb1c2b64"} Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.468528 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-fllmn" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.468816 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-fllmn" event={"ID":"ecb8a996-384c-4155-b45d-6a6335165545","Type":"ContainerDied","Data":"33d960cc92853c91418decd1c1e81af16c036144d8e551ab31b77730864076c3"} Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.468860 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33d960cc92853c91418decd1c1e81af16c036144d8e551ab31b77730864076c3" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.473871 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"49bcd071-b172-4180-996d-a8494ce80ab7","Type":"ContainerDied","Data":"c4bd6d4a50528753ee39f385b25433a38f084b70a487761e402319d168c73922"} Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.473916 4769 scope.go:117] "RemoveContainer" containerID="a2d9a00afd560361b63a4a984016f967c6c70fe342eda3b82ceb9f885d271c07" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.473970 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.539142 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.553807 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.555979 4769 scope.go:117] "RemoveContainer" containerID="938d482072f52ec70bd25d780639f9001b17b5d4e8cfed165c79e03594adbc40" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.563952 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 14:02:59 crc kubenswrapper[4769]: E0122 14:02:59.564650 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49bcd071-b172-4180-996d-a8494ce80ab7" containerName="glance-httpd" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.564733 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="49bcd071-b172-4180-996d-a8494ce80ab7" containerName="glance-httpd" Jan 22 14:02:59 crc kubenswrapper[4769]: E0122 14:02:59.564821 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecb8a996-384c-4155-b45d-6a6335165545" containerName="mariadb-database-create" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.564905 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecb8a996-384c-4155-b45d-6a6335165545" containerName="mariadb-database-create" Jan 22 14:02:59 crc kubenswrapper[4769]: E0122 14:02:59.564967 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49bcd071-b172-4180-996d-a8494ce80ab7" containerName="glance-log" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.565022 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="49bcd071-b172-4180-996d-a8494ce80ab7" containerName="glance-log" Jan 22 14:02:59 crc kubenswrapper[4769]: E0122 14:02:59.565094 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe68065a-9702-4440-a09a-2698d21ad5cc" containerName="mariadb-account-create-update" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.565146 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe68065a-9702-4440-a09a-2698d21ad5cc" containerName="mariadb-account-create-update" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.565374 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe68065a-9702-4440-a09a-2698d21ad5cc" containerName="mariadb-account-create-update" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.565488 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="49bcd071-b172-4180-996d-a8494ce80ab7" containerName="glance-httpd" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.565574 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecb8a996-384c-4155-b45d-6a6335165545" containerName="mariadb-database-create" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.565652 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="49bcd071-b172-4180-996d-a8494ce80ab7" containerName="glance-log" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.567693 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.570911 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.571169 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.590379 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.731497 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adf621f0-a198-4042-93a3-791ed71e1ee3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"adf621f0-a198-4042-93a3-791ed71e1ee3\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.731883 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"adf621f0-a198-4042-93a3-791ed71e1ee3\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.731954 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/adf621f0-a198-4042-93a3-791ed71e1ee3-logs\") pod \"glance-default-internal-api-0\" (UID: \"adf621f0-a198-4042-93a3-791ed71e1ee3\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.731981 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adf621f0-a198-4042-93a3-791ed71e1ee3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"adf621f0-a198-4042-93a3-791ed71e1ee3\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.732043 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/adf621f0-a198-4042-93a3-791ed71e1ee3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"adf621f0-a198-4042-93a3-791ed71e1ee3\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.732079 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvdr7\" (UniqueName: \"kubernetes.io/projected/adf621f0-a198-4042-93a3-791ed71e1ee3-kube-api-access-fvdr7\") pod \"glance-default-internal-api-0\" (UID: \"adf621f0-a198-4042-93a3-791ed71e1ee3\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.732230 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/adf621f0-a198-4042-93a3-791ed71e1ee3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"adf621f0-a198-4042-93a3-791ed71e1ee3\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.732349 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/adf621f0-a198-4042-93a3-791ed71e1ee3-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"adf621f0-a198-4042-93a3-791ed71e1ee3\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.862868 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/adf621f0-a198-4042-93a3-791ed71e1ee3-logs\") pod \"glance-default-internal-api-0\" (UID: \"adf621f0-a198-4042-93a3-791ed71e1ee3\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.862919 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adf621f0-a198-4042-93a3-791ed71e1ee3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"adf621f0-a198-4042-93a3-791ed71e1ee3\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.862963 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/adf621f0-a198-4042-93a3-791ed71e1ee3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"adf621f0-a198-4042-93a3-791ed71e1ee3\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.862990 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvdr7\" (UniqueName: \"kubernetes.io/projected/adf621f0-a198-4042-93a3-791ed71e1ee3-kube-api-access-fvdr7\") pod \"glance-default-internal-api-0\" (UID: \"adf621f0-a198-4042-93a3-791ed71e1ee3\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.863043 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/adf621f0-a198-4042-93a3-791ed71e1ee3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"adf621f0-a198-4042-93a3-791ed71e1ee3\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.863083 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/adf621f0-a198-4042-93a3-791ed71e1ee3-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"adf621f0-a198-4042-93a3-791ed71e1ee3\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.863119 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adf621f0-a198-4042-93a3-791ed71e1ee3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"adf621f0-a198-4042-93a3-791ed71e1ee3\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.863145 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"adf621f0-a198-4042-93a3-791ed71e1ee3\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.863576 4769 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"adf621f0-a198-4042-93a3-791ed71e1ee3\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-internal-api-0" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.863955 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/adf621f0-a198-4042-93a3-791ed71e1ee3-logs\") pod \"glance-default-internal-api-0\" (UID: \"adf621f0-a198-4042-93a3-791ed71e1ee3\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.865077 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/adf621f0-a198-4042-93a3-791ed71e1ee3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"adf621f0-a198-4042-93a3-791ed71e1ee3\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.868582 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/adf621f0-a198-4042-93a3-791ed71e1ee3-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"adf621f0-a198-4042-93a3-791ed71e1ee3\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.868596 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adf621f0-a198-4042-93a3-791ed71e1ee3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"adf621f0-a198-4042-93a3-791ed71e1ee3\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.871572 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adf621f0-a198-4042-93a3-791ed71e1ee3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"adf621f0-a198-4042-93a3-791ed71e1ee3\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.873411 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/adf621f0-a198-4042-93a3-791ed71e1ee3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"adf621f0-a198-4042-93a3-791ed71e1ee3\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.893119 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvdr7\" (UniqueName: \"kubernetes.io/projected/adf621f0-a198-4042-93a3-791ed71e1ee3-kube-api-access-fvdr7\") pod \"glance-default-internal-api-0\" (UID: \"adf621f0-a198-4042-93a3-791ed71e1ee3\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.937211 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"adf621f0-a198-4042-93a3-791ed71e1ee3\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.023476 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-tx7mp" Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.078248 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-49d8-account-create-update-gnbhc" Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.100297 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-ddb8-account-create-update-zm48k" Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.121753 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-5t26t" Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.169484 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9z8l\" (UniqueName: \"kubernetes.io/projected/288566dc-b78e-46e4-9bd3-c61bc9c2a6ce-kube-api-access-g9z8l\") pod \"288566dc-b78e-46e4-9bd3-c61bc9c2a6ce\" (UID: \"288566dc-b78e-46e4-9bd3-c61bc9c2a6ce\") " Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.169699 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/288566dc-b78e-46e4-9bd3-c61bc9c2a6ce-operator-scripts\") pod \"288566dc-b78e-46e4-9bd3-c61bc9c2a6ce\" (UID: \"288566dc-b78e-46e4-9bd3-c61bc9c2a6ce\") " Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.169744 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gw7l9\" (UniqueName: \"kubernetes.io/projected/b33b7a35-52b8-47c6-b5a7-5cf87d838927-kube-api-access-gw7l9\") pod \"b33b7a35-52b8-47c6-b5a7-5cf87d838927\" (UID: \"b33b7a35-52b8-47c6-b5a7-5cf87d838927\") " Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.169761 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b33b7a35-52b8-47c6-b5a7-5cf87d838927-operator-scripts\") pod \"b33b7a35-52b8-47c6-b5a7-5cf87d838927\" (UID: \"b33b7a35-52b8-47c6-b5a7-5cf87d838927\") " Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.170777 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b33b7a35-52b8-47c6-b5a7-5cf87d838927-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b33b7a35-52b8-47c6-b5a7-5cf87d838927" (UID: "b33b7a35-52b8-47c6-b5a7-5cf87d838927"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.170879 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/288566dc-b78e-46e4-9bd3-c61bc9c2a6ce-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "288566dc-b78e-46e4-9bd3-c61bc9c2a6ce" (UID: "288566dc-b78e-46e4-9bd3-c61bc9c2a6ce"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.174324 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/288566dc-b78e-46e4-9bd3-c61bc9c2a6ce-kube-api-access-g9z8l" (OuterVolumeSpecName: "kube-api-access-g9z8l") pod "288566dc-b78e-46e4-9bd3-c61bc9c2a6ce" (UID: "288566dc-b78e-46e4-9bd3-c61bc9c2a6ce"). InnerVolumeSpecName "kube-api-access-g9z8l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.174869 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b33b7a35-52b8-47c6-b5a7-5cf87d838927-kube-api-access-gw7l9" (OuterVolumeSpecName: "kube-api-access-gw7l9") pod "b33b7a35-52b8-47c6-b5a7-5cf87d838927" (UID: "b33b7a35-52b8-47c6-b5a7-5cf87d838927"). InnerVolumeSpecName "kube-api-access-gw7l9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.200076 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.271656 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2p7k8\" (UniqueName: \"kubernetes.io/projected/cdcc2db5-9739-4e49-a6cc-3f7aff70f97d-kube-api-access-2p7k8\") pod \"cdcc2db5-9739-4e49-a6cc-3f7aff70f97d\" (UID: \"cdcc2db5-9739-4e49-a6cc-3f7aff70f97d\") " Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.271884 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cdcc2db5-9739-4e49-a6cc-3f7aff70f97d-operator-scripts\") pod \"cdcc2db5-9739-4e49-a6cc-3f7aff70f97d\" (UID: \"cdcc2db5-9739-4e49-a6cc-3f7aff70f97d\") " Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.271910 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e45f7c9a-23a2-40fe-80dc-305f1fbc8e17-operator-scripts\") pod \"e45f7c9a-23a2-40fe-80dc-305f1fbc8e17\" (UID: \"e45f7c9a-23a2-40fe-80dc-305f1fbc8e17\") " Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.272015 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mk2z9\" (UniqueName: \"kubernetes.io/projected/e45f7c9a-23a2-40fe-80dc-305f1fbc8e17-kube-api-access-mk2z9\") pod \"e45f7c9a-23a2-40fe-80dc-305f1fbc8e17\" (UID: \"e45f7c9a-23a2-40fe-80dc-305f1fbc8e17\") " Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.272483 4769 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/288566dc-b78e-46e4-9bd3-c61bc9c2a6ce-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.272503 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gw7l9\" (UniqueName: \"kubernetes.io/projected/b33b7a35-52b8-47c6-b5a7-5cf87d838927-kube-api-access-gw7l9\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.272515 4769 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b33b7a35-52b8-47c6-b5a7-5cf87d838927-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.272525 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9z8l\" (UniqueName: \"kubernetes.io/projected/288566dc-b78e-46e4-9bd3-c61bc9c2a6ce-kube-api-access-g9z8l\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.273323 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdcc2db5-9739-4e49-a6cc-3f7aff70f97d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"cdcc2db5-9739-4e49-a6cc-3f7aff70f97d" (UID: "cdcc2db5-9739-4e49-a6cc-3f7aff70f97d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.273574 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e45f7c9a-23a2-40fe-80dc-305f1fbc8e17-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e45f7c9a-23a2-40fe-80dc-305f1fbc8e17" (UID: "e45f7c9a-23a2-40fe-80dc-305f1fbc8e17"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.275527 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e45f7c9a-23a2-40fe-80dc-305f1fbc8e17-kube-api-access-mk2z9" (OuterVolumeSpecName: "kube-api-access-mk2z9") pod "e45f7c9a-23a2-40fe-80dc-305f1fbc8e17" (UID: "e45f7c9a-23a2-40fe-80dc-305f1fbc8e17"). InnerVolumeSpecName "kube-api-access-mk2z9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.276482 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdcc2db5-9739-4e49-a6cc-3f7aff70f97d-kube-api-access-2p7k8" (OuterVolumeSpecName: "kube-api-access-2p7k8") pod "cdcc2db5-9739-4e49-a6cc-3f7aff70f97d" (UID: "cdcc2db5-9739-4e49-a6cc-3f7aff70f97d"). InnerVolumeSpecName "kube-api-access-2p7k8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.374673 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mk2z9\" (UniqueName: \"kubernetes.io/projected/e45f7c9a-23a2-40fe-80dc-305f1fbc8e17-kube-api-access-mk2z9\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.375043 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2p7k8\" (UniqueName: \"kubernetes.io/projected/cdcc2db5-9739-4e49-a6cc-3f7aff70f97d-kube-api-access-2p7k8\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.375054 4769 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cdcc2db5-9739-4e49-a6cc-3f7aff70f97d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.375065 4769 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e45f7c9a-23a2-40fe-80dc-305f1fbc8e17-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.461567 4769 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6464b9bcc6-tjgjv" podUID="aa581bf8-802c-4c64-80fe-83a1baf50a6e" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.461726 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6464b9bcc6-tjgjv" Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.515509 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"11b92673-89ea-4ef5-87f5-743e06fcb861","Type":"ContainerStarted","Data":"d08590a7a0c967986a2faec48ac3132b3f1e6ae845f20966fe975a096a748578"} Jan 22 14:03:00 crc 
kubenswrapper[4769]: I0122 14:03:00.520556 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-ddb8-account-create-update-zm48k" Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.520567 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-ddb8-account-create-update-zm48k" event={"ID":"cdcc2db5-9739-4e49-a6cc-3f7aff70f97d","Type":"ContainerDied","Data":"08b0b5abfe60f5c3c4d81e0794fb73d02949bc2843159af9976a8ea288ce36e5"} Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.520602 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08b0b5abfe60f5c3c4d81e0794fb73d02949bc2843159af9976a8ea288ce36e5" Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.526752 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-5t26t" event={"ID":"e45f7c9a-23a2-40fe-80dc-305f1fbc8e17","Type":"ContainerDied","Data":"c1e8dfd11532902b9aba6d45844dcf3a73a1816450e5c693654fc410ab3cb953"} Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.526754 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-5t26t" Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.526800 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1e8dfd11532902b9aba6d45844dcf3a73a1816450e5c693654fc410ab3cb953" Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.531689 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-tx7mp" event={"ID":"288566dc-b78e-46e4-9bd3-c61bc9c2a6ce","Type":"ContainerDied","Data":"9b721a5f2a54f7e10b9d6313d093c22bf6e06ca26d653a2b9eddb1cde91b429e"} Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.531729 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b721a5f2a54f7e10b9d6313d093c22bf6e06ca26d653a2b9eddb1cde91b429e" Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.531894 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-tx7mp" Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.537269 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-49d8-account-create-update-gnbhc" event={"ID":"b33b7a35-52b8-47c6-b5a7-5cf87d838927","Type":"ContainerDied","Data":"3bde0705d34c87d4eabfe7fb123b426bb1c060e1a93c38781b2d5073620c51be"} Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.537306 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3bde0705d34c87d4eabfe7fb123b426bb1c060e1a93c38781b2d5073620c51be" Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.537359 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-49d8-account-create-update-gnbhc" Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.544563 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6e1405ea-42cd-4345-b44a-8e72350a3357","Type":"ContainerStarted","Data":"4e7a8c300758f336f2c192ba31db93d9dc1a12401810a6e6dcd30912c6c08140"} Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.778705 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.939941 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49bcd071-b172-4180-996d-a8494ce80ab7" path="/var/lib/kubelet/pods/49bcd071-b172-4180-996d-a8494ce80ab7/volumes" Jan 22 14:03:01 crc kubenswrapper[4769]: I0122 14:03:01.563639 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6e1405ea-42cd-4345-b44a-8e72350a3357","Type":"ContainerStarted","Data":"4837b05c7f14955b3fadbc1a6bb3a6669b78714341955303f28396fc19c04de6"} Jan 22 14:03:01 crc kubenswrapper[4769]: I0122 14:03:01.576363 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"adf621f0-a198-4042-93a3-791ed71e1ee3","Type":"ContainerStarted","Data":"c1b94a0de5367741301d88181f6a32ce4effeeab43e55cc22517bb07d983c82c"} Jan 22 14:03:01 crc kubenswrapper[4769]: I0122 14:03:01.576407 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"adf621f0-a198-4042-93a3-791ed71e1ee3","Type":"ContainerStarted","Data":"1ad7ae6160aaee4cfd37c4d02c6de3469c26afd562fb4491a8ca33ec92fca600"} Jan 22 14:03:01 crc kubenswrapper[4769]: I0122 14:03:01.589432 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"11b92673-89ea-4ef5-87f5-743e06fcb861","Type":"ContainerStarted","Data":"8e7e24d69f5473c0a9b871e786b088228a456a7e3d57efd297e1d5f2dce38de2"} Jan 22 14:03:01 crc kubenswrapper[4769]: I0122 14:03:01.597659 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.597637672 podStartE2EDuration="4.597637672s" podCreationTimestamp="2026-01-22 14:02:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:03:01.589909443 +0000 UTC m=+1161.001019372" watchObservedRunningTime="2026-01-22 14:03:01.597637672 +0000 UTC m=+1161.008747601" Jan 22 14:03:02 crc kubenswrapper[4769]: I0122 14:03:02.604829 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"adf621f0-a198-4042-93a3-791ed71e1ee3","Type":"ContainerStarted","Data":"6f57ef2050aada2007225172f0c8fe10cb1bf865b0bf6cc5ac57c3ae05313025"} Jan 22 14:03:02 crc kubenswrapper[4769]: I0122 14:03:02.637440 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.637424286 podStartE2EDuration="3.637424286s" podCreationTimestamp="2026-01-22 14:02:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:03:02.628879274 +0000 UTC m=+1162.039989213" watchObservedRunningTime="2026-01-22 14:03:02.637424286 +0000 UTC m=+1162.048534215" Jan 22 14:03:02 crc 
kubenswrapper[4769]: I0122 14:03:02.993101 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-576cb8587-7cl26" Jan 22 14:03:03 crc kubenswrapper[4769]: I0122 14:03:03.000588 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-576cb8587-7cl26" Jan 22 14:03:03 crc kubenswrapper[4769]: I0122 14:03:03.617938 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"11b92673-89ea-4ef5-87f5-743e06fcb861","Type":"ContainerStarted","Data":"043ee8a04c5433a5a41fc8257ea6c5b0090b5249d686422b5b4e2620f92f0038"} Jan 22 14:03:03 crc kubenswrapper[4769]: I0122 14:03:03.618048 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="11b92673-89ea-4ef5-87f5-743e06fcb861" containerName="ceilometer-central-agent" containerID="cri-o://cc12fb32b2246b53fd5e4f662494ad847d5f628bbbf6ac0a4c83be5adb1c2b64" gracePeriod=30 Jan 22 14:03:03 crc kubenswrapper[4769]: I0122 14:03:03.618086 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="11b92673-89ea-4ef5-87f5-743e06fcb861" containerName="sg-core" containerID="cri-o://8e7e24d69f5473c0a9b871e786b088228a456a7e3d57efd297e1d5f2dce38de2" gracePeriod=30 Jan 22 14:03:03 crc kubenswrapper[4769]: I0122 14:03:03.618457 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 22 14:03:03 crc kubenswrapper[4769]: I0122 14:03:03.618182 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="11b92673-89ea-4ef5-87f5-743e06fcb861" containerName="ceilometer-notification-agent" containerID="cri-o://d08590a7a0c967986a2faec48ac3132b3f1e6ae845f20966fe975a096a748578" gracePeriod=30 Jan 22 14:03:03 crc kubenswrapper[4769]: I0122 14:03:03.618103 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="11b92673-89ea-4ef5-87f5-743e06fcb861" containerName="proxy-httpd" containerID="cri-o://043ee8a04c5433a5a41fc8257ea6c5b0090b5249d686422b5b4e2620f92f0038" gracePeriod=30 Jan 22 14:03:03 crc kubenswrapper[4769]: I0122 14:03:03.642018 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.479111228 podStartE2EDuration="6.641996485s" podCreationTimestamp="2026-01-22 14:02:57 +0000 UTC" firstStartedPulling="2026-01-22 14:02:58.31081818 +0000 UTC m=+1157.721928119" lastFinishedPulling="2026-01-22 14:03:02.473703447 +0000 UTC m=+1161.884813376" observedRunningTime="2026-01-22 14:03:03.638338756 +0000 UTC m=+1163.049448685" watchObservedRunningTime="2026-01-22 14:03:03.641996485 +0000 UTC m=+1163.053106414" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.324040 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.472360 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11b92673-89ea-4ef5-87f5-743e06fcb861-config-data\") pod \"11b92673-89ea-4ef5-87f5-743e06fcb861\" (UID: \"11b92673-89ea-4ef5-87f5-743e06fcb861\") " Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.472425 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/11b92673-89ea-4ef5-87f5-743e06fcb861-log-httpd\") pod \"11b92673-89ea-4ef5-87f5-743e06fcb861\" (UID: \"11b92673-89ea-4ef5-87f5-743e06fcb861\") " Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.472443 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/11b92673-89ea-4ef5-87f5-743e06fcb861-run-httpd\") pod \"11b92673-89ea-4ef5-87f5-743e06fcb861\" (UID: \"11b92673-89ea-4ef5-87f5-743e06fcb861\") " Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.472473 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11b92673-89ea-4ef5-87f5-743e06fcb861-scripts\") pod \"11b92673-89ea-4ef5-87f5-743e06fcb861\" (UID: \"11b92673-89ea-4ef5-87f5-743e06fcb861\") " Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.472505 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11b92673-89ea-4ef5-87f5-743e06fcb861-combined-ca-bundle\") pod \"11b92673-89ea-4ef5-87f5-743e06fcb861\" (UID: \"11b92673-89ea-4ef5-87f5-743e06fcb861\") " Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.472553 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wn6k\" (UniqueName: \"kubernetes.io/projected/11b92673-89ea-4ef5-87f5-743e06fcb861-kube-api-access-5wn6k\") pod \"11b92673-89ea-4ef5-87f5-743e06fcb861\" (UID: \"11b92673-89ea-4ef5-87f5-743e06fcb861\") " Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.472620 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/11b92673-89ea-4ef5-87f5-743e06fcb861-sg-core-conf-yaml\") pod \"11b92673-89ea-4ef5-87f5-743e06fcb861\" (UID: \"11b92673-89ea-4ef5-87f5-743e06fcb861\") " Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.473216 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11b92673-89ea-4ef5-87f5-743e06fcb861-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "11b92673-89ea-4ef5-87f5-743e06fcb861" (UID: "11b92673-89ea-4ef5-87f5-743e06fcb861"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.473574 4769 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/11b92673-89ea-4ef5-87f5-743e06fcb861-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.474065 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11b92673-89ea-4ef5-87f5-743e06fcb861-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "11b92673-89ea-4ef5-87f5-743e06fcb861" (UID: "11b92673-89ea-4ef5-87f5-743e06fcb861"). 
InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.478924 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11b92673-89ea-4ef5-87f5-743e06fcb861-scripts" (OuterVolumeSpecName: "scripts") pod "11b92673-89ea-4ef5-87f5-743e06fcb861" (UID: "11b92673-89ea-4ef5-87f5-743e06fcb861"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.479254 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11b92673-89ea-4ef5-87f5-743e06fcb861-kube-api-access-5wn6k" (OuterVolumeSpecName: "kube-api-access-5wn6k") pod "11b92673-89ea-4ef5-87f5-743e06fcb861" (UID: "11b92673-89ea-4ef5-87f5-743e06fcb861"). InnerVolumeSpecName "kube-api-access-5wn6k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.503924 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11b92673-89ea-4ef5-87f5-743e06fcb861-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "11b92673-89ea-4ef5-87f5-743e06fcb861" (UID: "11b92673-89ea-4ef5-87f5-743e06fcb861"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.548918 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11b92673-89ea-4ef5-87f5-743e06fcb861-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "11b92673-89ea-4ef5-87f5-743e06fcb861" (UID: "11b92673-89ea-4ef5-87f5-743e06fcb861"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.575337 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11b92673-89ea-4ef5-87f5-743e06fcb861-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.575380 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wn6k\" (UniqueName: \"kubernetes.io/projected/11b92673-89ea-4ef5-87f5-743e06fcb861-kube-api-access-5wn6k\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.575394 4769 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/11b92673-89ea-4ef5-87f5-743e06fcb861-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.575411 4769 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/11b92673-89ea-4ef5-87f5-743e06fcb861-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.575420 4769 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11b92673-89ea-4ef5-87f5-743e06fcb861-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.579079 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11b92673-89ea-4ef5-87f5-743e06fcb861-config-data" (OuterVolumeSpecName: "config-data") pod "11b92673-89ea-4ef5-87f5-743e06fcb861" (UID: "11b92673-89ea-4ef5-87f5-743e06fcb861"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.627815 4769 generic.go:334] "Generic (PLEG): container finished" podID="11b92673-89ea-4ef5-87f5-743e06fcb861" containerID="043ee8a04c5433a5a41fc8257ea6c5b0090b5249d686422b5b4e2620f92f0038" exitCode=0 Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.627852 4769 generic.go:334] "Generic (PLEG): container finished" podID="11b92673-89ea-4ef5-87f5-743e06fcb861" containerID="8e7e24d69f5473c0a9b871e786b088228a456a7e3d57efd297e1d5f2dce38de2" exitCode=2 Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.627866 4769 generic.go:334] "Generic (PLEG): container finished" podID="11b92673-89ea-4ef5-87f5-743e06fcb861" containerID="d08590a7a0c967986a2faec48ac3132b3f1e6ae845f20966fe975a096a748578" exitCode=0 Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.627875 4769 generic.go:334] "Generic (PLEG): container finished" podID="11b92673-89ea-4ef5-87f5-743e06fcb861" containerID="cc12fb32b2246b53fd5e4f662494ad847d5f628bbbf6ac0a4c83be5adb1c2b64" exitCode=0 Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.627896 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"11b92673-89ea-4ef5-87f5-743e06fcb861","Type":"ContainerDied","Data":"043ee8a04c5433a5a41fc8257ea6c5b0090b5249d686422b5b4e2620f92f0038"} Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.627908 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.627936 4769 scope.go:117] "RemoveContainer" containerID="043ee8a04c5433a5a41fc8257ea6c5b0090b5249d686422b5b4e2620f92f0038" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.627925 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"11b92673-89ea-4ef5-87f5-743e06fcb861","Type":"ContainerDied","Data":"8e7e24d69f5473c0a9b871e786b088228a456a7e3d57efd297e1d5f2dce38de2"} Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.628099 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"11b92673-89ea-4ef5-87f5-743e06fcb861","Type":"ContainerDied","Data":"d08590a7a0c967986a2faec48ac3132b3f1e6ae845f20966fe975a096a748578"} Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.628111 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"11b92673-89ea-4ef5-87f5-743e06fcb861","Type":"ContainerDied","Data":"cc12fb32b2246b53fd5e4f662494ad847d5f628bbbf6ac0a4c83be5adb1c2b64"} Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.628122 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"11b92673-89ea-4ef5-87f5-743e06fcb861","Type":"ContainerDied","Data":"90073d3abb4df7c2f402c287b27f42fcd53565c4e6e648db72612d0dd2e0511c"} Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.647722 4769 scope.go:117] "RemoveContainer" containerID="8e7e24d69f5473c0a9b871e786b088228a456a7e3d57efd297e1d5f2dce38de2" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.676948 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11b92673-89ea-4ef5-87f5-743e06fcb861-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.678005 4769 scope.go:117] "RemoveContainer" 
containerID="d08590a7a0c967986a2faec48ac3132b3f1e6ae845f20966fe975a096a748578" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.682032 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.691831 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.709246 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 22 14:03:04 crc kubenswrapper[4769]: E0122 14:03:04.709573 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b33b7a35-52b8-47c6-b5a7-5cf87d838927" containerName="mariadb-account-create-update" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.709587 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="b33b7a35-52b8-47c6-b5a7-5cf87d838927" containerName="mariadb-account-create-update" Jan 22 14:03:04 crc kubenswrapper[4769]: E0122 14:03:04.709599 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdcc2db5-9739-4e49-a6cc-3f7aff70f97d" containerName="mariadb-account-create-update" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.709606 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdcc2db5-9739-4e49-a6cc-3f7aff70f97d" containerName="mariadb-account-create-update" Jan 22 14:03:04 crc kubenswrapper[4769]: E0122 14:03:04.709619 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11b92673-89ea-4ef5-87f5-743e06fcb861" containerName="ceilometer-central-agent" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.709627 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="11b92673-89ea-4ef5-87f5-743e06fcb861" containerName="ceilometer-central-agent" Jan 22 14:03:04 crc kubenswrapper[4769]: E0122 14:03:04.709642 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11b92673-89ea-4ef5-87f5-743e06fcb861" containerName="proxy-httpd" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.709648 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="11b92673-89ea-4ef5-87f5-743e06fcb861" containerName="proxy-httpd" Jan 22 14:03:04 crc kubenswrapper[4769]: E0122 14:03:04.709700 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11b92673-89ea-4ef5-87f5-743e06fcb861" containerName="ceilometer-notification-agent" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.709706 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="11b92673-89ea-4ef5-87f5-743e06fcb861" containerName="ceilometer-notification-agent" Jan 22 14:03:04 crc kubenswrapper[4769]: E0122 14:03:04.709718 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e45f7c9a-23a2-40fe-80dc-305f1fbc8e17" containerName="mariadb-database-create" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.709726 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="e45f7c9a-23a2-40fe-80dc-305f1fbc8e17" containerName="mariadb-database-create" Jan 22 14:03:04 crc kubenswrapper[4769]: E0122 14:03:04.709737 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="288566dc-b78e-46e4-9bd3-c61bc9c2a6ce" containerName="mariadb-database-create" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.709743 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="288566dc-b78e-46e4-9bd3-c61bc9c2a6ce" containerName="mariadb-database-create" Jan 22 14:03:04 crc kubenswrapper[4769]: E0122 14:03:04.709753 4769 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="11b92673-89ea-4ef5-87f5-743e06fcb861" containerName="sg-core" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.709758 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="11b92673-89ea-4ef5-87f5-743e06fcb861" containerName="sg-core" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.709964 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="e45f7c9a-23a2-40fe-80dc-305f1fbc8e17" containerName="mariadb-database-create" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.709976 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="b33b7a35-52b8-47c6-b5a7-5cf87d838927" containerName="mariadb-account-create-update" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.709991 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="11b92673-89ea-4ef5-87f5-743e06fcb861" containerName="proxy-httpd" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.709999 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="11b92673-89ea-4ef5-87f5-743e06fcb861" containerName="ceilometer-notification-agent" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.710009 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="288566dc-b78e-46e4-9bd3-c61bc9c2a6ce" containerName="mariadb-database-create" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.710017 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="11b92673-89ea-4ef5-87f5-743e06fcb861" containerName="sg-core" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.710287 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="11b92673-89ea-4ef5-87f5-743e06fcb861" containerName="ceilometer-central-agent" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.710305 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdcc2db5-9739-4e49-a6cc-3f7aff70f97d" containerName="mariadb-account-create-update" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.711898 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.711989 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.762893 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.762997 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.781943 4769 scope.go:117] "RemoveContainer" containerID="cc12fb32b2246b53fd5e4f662494ad847d5f628bbbf6ac0a4c83be5adb1c2b64" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.882639 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a08cae6-6172-4bb5-9145-4bd967ff8652-run-httpd\") pod \"ceilometer-0\" (UID: \"0a08cae6-6172-4bb5-9145-4bd967ff8652\") " pod="openstack/ceilometer-0" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.883071 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a08cae6-6172-4bb5-9145-4bd967ff8652-config-data\") pod \"ceilometer-0\" (UID: \"0a08cae6-6172-4bb5-9145-4bd967ff8652\") " pod="openstack/ceilometer-0" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.883107 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0a08cae6-6172-4bb5-9145-4bd967ff8652-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0a08cae6-6172-4bb5-9145-4bd967ff8652\") " pod="openstack/ceilometer-0" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.883200 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfbwq\" (UniqueName: \"kubernetes.io/projected/0a08cae6-6172-4bb5-9145-4bd967ff8652-kube-api-access-rfbwq\") pod \"ceilometer-0\" (UID: \"0a08cae6-6172-4bb5-9145-4bd967ff8652\") " pod="openstack/ceilometer-0" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.883224 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a08cae6-6172-4bb5-9145-4bd967ff8652-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0a08cae6-6172-4bb5-9145-4bd967ff8652\") " pod="openstack/ceilometer-0" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.883270 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a08cae6-6172-4bb5-9145-4bd967ff8652-scripts\") pod \"ceilometer-0\" (UID: \"0a08cae6-6172-4bb5-9145-4bd967ff8652\") " pod="openstack/ceilometer-0" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.883320 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a08cae6-6172-4bb5-9145-4bd967ff8652-log-httpd\") pod \"ceilometer-0\" (UID: \"0a08cae6-6172-4bb5-9145-4bd967ff8652\") " pod="openstack/ceilometer-0" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.898493 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11b92673-89ea-4ef5-87f5-743e06fcb861" path="/var/lib/kubelet/pods/11b92673-89ea-4ef5-87f5-743e06fcb861/volumes" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.961493 4769 scope.go:117] "RemoveContainer" 
containerID="043ee8a04c5433a5a41fc8257ea6c5b0090b5249d686422b5b4e2620f92f0038" Jan 22 14:03:04 crc kubenswrapper[4769]: E0122 14:03:04.962032 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"043ee8a04c5433a5a41fc8257ea6c5b0090b5249d686422b5b4e2620f92f0038\": container with ID starting with 043ee8a04c5433a5a41fc8257ea6c5b0090b5249d686422b5b4e2620f92f0038 not found: ID does not exist" containerID="043ee8a04c5433a5a41fc8257ea6c5b0090b5249d686422b5b4e2620f92f0038" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.962109 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"043ee8a04c5433a5a41fc8257ea6c5b0090b5249d686422b5b4e2620f92f0038"} err="failed to get container status \"043ee8a04c5433a5a41fc8257ea6c5b0090b5249d686422b5b4e2620f92f0038\": rpc error: code = NotFound desc = could not find container \"043ee8a04c5433a5a41fc8257ea6c5b0090b5249d686422b5b4e2620f92f0038\": container with ID starting with 043ee8a04c5433a5a41fc8257ea6c5b0090b5249d686422b5b4e2620f92f0038 not found: ID does not exist" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.962134 4769 scope.go:117] "RemoveContainer" containerID="8e7e24d69f5473c0a9b871e786b088228a456a7e3d57efd297e1d5f2dce38de2" Jan 22 14:03:04 crc kubenswrapper[4769]: E0122 14:03:04.962397 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e7e24d69f5473c0a9b871e786b088228a456a7e3d57efd297e1d5f2dce38de2\": container with ID starting with 8e7e24d69f5473c0a9b871e786b088228a456a7e3d57efd297e1d5f2dce38de2 not found: ID does not exist" containerID="8e7e24d69f5473c0a9b871e786b088228a456a7e3d57efd297e1d5f2dce38de2" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.962423 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e7e24d69f5473c0a9b871e786b088228a456a7e3d57efd297e1d5f2dce38de2"} err="failed to get container status \"8e7e24d69f5473c0a9b871e786b088228a456a7e3d57efd297e1d5f2dce38de2\": rpc error: code = NotFound desc = could not find container \"8e7e24d69f5473c0a9b871e786b088228a456a7e3d57efd297e1d5f2dce38de2\": container with ID starting with 8e7e24d69f5473c0a9b871e786b088228a456a7e3d57efd297e1d5f2dce38de2 not found: ID does not exist" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.962439 4769 scope.go:117] "RemoveContainer" containerID="d08590a7a0c967986a2faec48ac3132b3f1e6ae845f20966fe975a096a748578" Jan 22 14:03:04 crc kubenswrapper[4769]: E0122 14:03:04.962716 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d08590a7a0c967986a2faec48ac3132b3f1e6ae845f20966fe975a096a748578\": container with ID starting with d08590a7a0c967986a2faec48ac3132b3f1e6ae845f20966fe975a096a748578 not found: ID does not exist" containerID="d08590a7a0c967986a2faec48ac3132b3f1e6ae845f20966fe975a096a748578" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.962737 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d08590a7a0c967986a2faec48ac3132b3f1e6ae845f20966fe975a096a748578"} err="failed to get container status \"d08590a7a0c967986a2faec48ac3132b3f1e6ae845f20966fe975a096a748578\": rpc error: code = NotFound desc = could not find container \"d08590a7a0c967986a2faec48ac3132b3f1e6ae845f20966fe975a096a748578\": container with ID starting with 
d08590a7a0c967986a2faec48ac3132b3f1e6ae845f20966fe975a096a748578 not found: ID does not exist" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.962750 4769 scope.go:117] "RemoveContainer" containerID="cc12fb32b2246b53fd5e4f662494ad847d5f628bbbf6ac0a4c83be5adb1c2b64" Jan 22 14:03:04 crc kubenswrapper[4769]: E0122 14:03:04.963071 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc12fb32b2246b53fd5e4f662494ad847d5f628bbbf6ac0a4c83be5adb1c2b64\": container with ID starting with cc12fb32b2246b53fd5e4f662494ad847d5f628bbbf6ac0a4c83be5adb1c2b64 not found: ID does not exist" containerID="cc12fb32b2246b53fd5e4f662494ad847d5f628bbbf6ac0a4c83be5adb1c2b64" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.963104 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc12fb32b2246b53fd5e4f662494ad847d5f628bbbf6ac0a4c83be5adb1c2b64"} err="failed to get container status \"cc12fb32b2246b53fd5e4f662494ad847d5f628bbbf6ac0a4c83be5adb1c2b64\": rpc error: code = NotFound desc = could not find container \"cc12fb32b2246b53fd5e4f662494ad847d5f628bbbf6ac0a4c83be5adb1c2b64\": container with ID starting with cc12fb32b2246b53fd5e4f662494ad847d5f628bbbf6ac0a4c83be5adb1c2b64 not found: ID does not exist" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.963131 4769 scope.go:117] "RemoveContainer" containerID="043ee8a04c5433a5a41fc8257ea6c5b0090b5249d686422b5b4e2620f92f0038" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.963361 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"043ee8a04c5433a5a41fc8257ea6c5b0090b5249d686422b5b4e2620f92f0038"} err="failed to get container status \"043ee8a04c5433a5a41fc8257ea6c5b0090b5249d686422b5b4e2620f92f0038\": rpc error: code = NotFound desc = could not find container \"043ee8a04c5433a5a41fc8257ea6c5b0090b5249d686422b5b4e2620f92f0038\": container with ID starting with 043ee8a04c5433a5a41fc8257ea6c5b0090b5249d686422b5b4e2620f92f0038 not found: ID does not exist" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.963390 4769 scope.go:117] "RemoveContainer" containerID="8e7e24d69f5473c0a9b871e786b088228a456a7e3d57efd297e1d5f2dce38de2" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.963636 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e7e24d69f5473c0a9b871e786b088228a456a7e3d57efd297e1d5f2dce38de2"} err="failed to get container status \"8e7e24d69f5473c0a9b871e786b088228a456a7e3d57efd297e1d5f2dce38de2\": rpc error: code = NotFound desc = could not find container \"8e7e24d69f5473c0a9b871e786b088228a456a7e3d57efd297e1d5f2dce38de2\": container with ID starting with 8e7e24d69f5473c0a9b871e786b088228a456a7e3d57efd297e1d5f2dce38de2 not found: ID does not exist" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.963660 4769 scope.go:117] "RemoveContainer" containerID="d08590a7a0c967986a2faec48ac3132b3f1e6ae845f20966fe975a096a748578" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.964278 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d08590a7a0c967986a2faec48ac3132b3f1e6ae845f20966fe975a096a748578"} err="failed to get container status \"d08590a7a0c967986a2faec48ac3132b3f1e6ae845f20966fe975a096a748578\": rpc error: code = NotFound desc = could not find container \"d08590a7a0c967986a2faec48ac3132b3f1e6ae845f20966fe975a096a748578\": container with ID starting with 
d08590a7a0c967986a2faec48ac3132b3f1e6ae845f20966fe975a096a748578 not found: ID does not exist" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.964301 4769 scope.go:117] "RemoveContainer" containerID="cc12fb32b2246b53fd5e4f662494ad847d5f628bbbf6ac0a4c83be5adb1c2b64" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.964551 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc12fb32b2246b53fd5e4f662494ad847d5f628bbbf6ac0a4c83be5adb1c2b64"} err="failed to get container status \"cc12fb32b2246b53fd5e4f662494ad847d5f628bbbf6ac0a4c83be5adb1c2b64\": rpc error: code = NotFound desc = could not find container \"cc12fb32b2246b53fd5e4f662494ad847d5f628bbbf6ac0a4c83be5adb1c2b64\": container with ID starting with cc12fb32b2246b53fd5e4f662494ad847d5f628bbbf6ac0a4c83be5adb1c2b64 not found: ID does not exist" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.964575 4769 scope.go:117] "RemoveContainer" containerID="043ee8a04c5433a5a41fc8257ea6c5b0090b5249d686422b5b4e2620f92f0038" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.964741 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"043ee8a04c5433a5a41fc8257ea6c5b0090b5249d686422b5b4e2620f92f0038"} err="failed to get container status \"043ee8a04c5433a5a41fc8257ea6c5b0090b5249d686422b5b4e2620f92f0038\": rpc error: code = NotFound desc = could not find container \"043ee8a04c5433a5a41fc8257ea6c5b0090b5249d686422b5b4e2620f92f0038\": container with ID starting with 043ee8a04c5433a5a41fc8257ea6c5b0090b5249d686422b5b4e2620f92f0038 not found: ID does not exist" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.964762 4769 scope.go:117] "RemoveContainer" containerID="8e7e24d69f5473c0a9b871e786b088228a456a7e3d57efd297e1d5f2dce38de2" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.964951 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e7e24d69f5473c0a9b871e786b088228a456a7e3d57efd297e1d5f2dce38de2"} err="failed to get container status \"8e7e24d69f5473c0a9b871e786b088228a456a7e3d57efd297e1d5f2dce38de2\": rpc error: code = NotFound desc = could not find container \"8e7e24d69f5473c0a9b871e786b088228a456a7e3d57efd297e1d5f2dce38de2\": container with ID starting with 8e7e24d69f5473c0a9b871e786b088228a456a7e3d57efd297e1d5f2dce38de2 not found: ID does not exist" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.964973 4769 scope.go:117] "RemoveContainer" containerID="d08590a7a0c967986a2faec48ac3132b3f1e6ae845f20966fe975a096a748578" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.965224 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d08590a7a0c967986a2faec48ac3132b3f1e6ae845f20966fe975a096a748578"} err="failed to get container status \"d08590a7a0c967986a2faec48ac3132b3f1e6ae845f20966fe975a096a748578\": rpc error: code = NotFound desc = could not find container \"d08590a7a0c967986a2faec48ac3132b3f1e6ae845f20966fe975a096a748578\": container with ID starting with d08590a7a0c967986a2faec48ac3132b3f1e6ae845f20966fe975a096a748578 not found: ID does not exist" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.965242 4769 scope.go:117] "RemoveContainer" containerID="cc12fb32b2246b53fd5e4f662494ad847d5f628bbbf6ac0a4c83be5adb1c2b64" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.965484 4769 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"cc12fb32b2246b53fd5e4f662494ad847d5f628bbbf6ac0a4c83be5adb1c2b64"} err="failed to get container status \"cc12fb32b2246b53fd5e4f662494ad847d5f628bbbf6ac0a4c83be5adb1c2b64\": rpc error: code = NotFound desc = could not find container \"cc12fb32b2246b53fd5e4f662494ad847d5f628bbbf6ac0a4c83be5adb1c2b64\": container with ID starting with cc12fb32b2246b53fd5e4f662494ad847d5f628bbbf6ac0a4c83be5adb1c2b64 not found: ID does not exist" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.965551 4769 scope.go:117] "RemoveContainer" containerID="043ee8a04c5433a5a41fc8257ea6c5b0090b5249d686422b5b4e2620f92f0038" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.965943 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"043ee8a04c5433a5a41fc8257ea6c5b0090b5249d686422b5b4e2620f92f0038"} err="failed to get container status \"043ee8a04c5433a5a41fc8257ea6c5b0090b5249d686422b5b4e2620f92f0038\": rpc error: code = NotFound desc = could not find container \"043ee8a04c5433a5a41fc8257ea6c5b0090b5249d686422b5b4e2620f92f0038\": container with ID starting with 043ee8a04c5433a5a41fc8257ea6c5b0090b5249d686422b5b4e2620f92f0038 not found: ID does not exist" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.965967 4769 scope.go:117] "RemoveContainer" containerID="8e7e24d69f5473c0a9b871e786b088228a456a7e3d57efd297e1d5f2dce38de2" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.966209 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e7e24d69f5473c0a9b871e786b088228a456a7e3d57efd297e1d5f2dce38de2"} err="failed to get container status \"8e7e24d69f5473c0a9b871e786b088228a456a7e3d57efd297e1d5f2dce38de2\": rpc error: code = NotFound desc = could not find container \"8e7e24d69f5473c0a9b871e786b088228a456a7e3d57efd297e1d5f2dce38de2\": container with ID starting with 8e7e24d69f5473c0a9b871e786b088228a456a7e3d57efd297e1d5f2dce38de2 not found: ID does not exist" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.966231 4769 scope.go:117] "RemoveContainer" containerID="d08590a7a0c967986a2faec48ac3132b3f1e6ae845f20966fe975a096a748578" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.966553 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d08590a7a0c967986a2faec48ac3132b3f1e6ae845f20966fe975a096a748578"} err="failed to get container status \"d08590a7a0c967986a2faec48ac3132b3f1e6ae845f20966fe975a096a748578\": rpc error: code = NotFound desc = could not find container \"d08590a7a0c967986a2faec48ac3132b3f1e6ae845f20966fe975a096a748578\": container with ID starting with d08590a7a0c967986a2faec48ac3132b3f1e6ae845f20966fe975a096a748578 not found: ID does not exist" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.966579 4769 scope.go:117] "RemoveContainer" containerID="cc12fb32b2246b53fd5e4f662494ad847d5f628bbbf6ac0a4c83be5adb1c2b64" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.966836 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc12fb32b2246b53fd5e4f662494ad847d5f628bbbf6ac0a4c83be5adb1c2b64"} err="failed to get container status \"cc12fb32b2246b53fd5e4f662494ad847d5f628bbbf6ac0a4c83be5adb1c2b64\": rpc error: code = NotFound desc = could not find container \"cc12fb32b2246b53fd5e4f662494ad847d5f628bbbf6ac0a4c83be5adb1c2b64\": container with ID starting with cc12fb32b2246b53fd5e4f662494ad847d5f628bbbf6ac0a4c83be5adb1c2b64 not found: ID does not exist" Jan 
22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.985887 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfbwq\" (UniqueName: \"kubernetes.io/projected/0a08cae6-6172-4bb5-9145-4bd967ff8652-kube-api-access-rfbwq\") pod \"ceilometer-0\" (UID: \"0a08cae6-6172-4bb5-9145-4bd967ff8652\") " pod="openstack/ceilometer-0" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.985947 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a08cae6-6172-4bb5-9145-4bd967ff8652-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0a08cae6-6172-4bb5-9145-4bd967ff8652\") " pod="openstack/ceilometer-0" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.985974 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a08cae6-6172-4bb5-9145-4bd967ff8652-scripts\") pod \"ceilometer-0\" (UID: \"0a08cae6-6172-4bb5-9145-4bd967ff8652\") " pod="openstack/ceilometer-0" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.986100 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a08cae6-6172-4bb5-9145-4bd967ff8652-log-httpd\") pod \"ceilometer-0\" (UID: \"0a08cae6-6172-4bb5-9145-4bd967ff8652\") " pod="openstack/ceilometer-0" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.986150 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a08cae6-6172-4bb5-9145-4bd967ff8652-run-httpd\") pod \"ceilometer-0\" (UID: \"0a08cae6-6172-4bb5-9145-4bd967ff8652\") " pod="openstack/ceilometer-0" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.986190 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a08cae6-6172-4bb5-9145-4bd967ff8652-config-data\") pod \"ceilometer-0\" (UID: \"0a08cae6-6172-4bb5-9145-4bd967ff8652\") " pod="openstack/ceilometer-0" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.986242 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0a08cae6-6172-4bb5-9145-4bd967ff8652-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0a08cae6-6172-4bb5-9145-4bd967ff8652\") " pod="openstack/ceilometer-0" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.988967 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a08cae6-6172-4bb5-9145-4bd967ff8652-run-httpd\") pod \"ceilometer-0\" (UID: \"0a08cae6-6172-4bb5-9145-4bd967ff8652\") " pod="openstack/ceilometer-0" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.989782 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a08cae6-6172-4bb5-9145-4bd967ff8652-log-httpd\") pod \"ceilometer-0\" (UID: \"0a08cae6-6172-4bb5-9145-4bd967ff8652\") " pod="openstack/ceilometer-0" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:04.995718 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0a08cae6-6172-4bb5-9145-4bd967ff8652-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0a08cae6-6172-4bb5-9145-4bd967ff8652\") " pod="openstack/ceilometer-0" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 
14:03:04.996351 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a08cae6-6172-4bb5-9145-4bd967ff8652-config-data\") pod \"ceilometer-0\" (UID: \"0a08cae6-6172-4bb5-9145-4bd967ff8652\") " pod="openstack/ceilometer-0" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.001415 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a08cae6-6172-4bb5-9145-4bd967ff8652-scripts\") pod \"ceilometer-0\" (UID: \"0a08cae6-6172-4bb5-9145-4bd967ff8652\") " pod="openstack/ceilometer-0" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.013587 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfbwq\" (UniqueName: \"kubernetes.io/projected/0a08cae6-6172-4bb5-9145-4bd967ff8652-kube-api-access-rfbwq\") pod \"ceilometer-0\" (UID: \"0a08cae6-6172-4bb5-9145-4bd967ff8652\") " pod="openstack/ceilometer-0" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.031228 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a08cae6-6172-4bb5-9145-4bd967ff8652-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0a08cae6-6172-4bb5-9145-4bd967ff8652\") " pod="openstack/ceilometer-0" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.105120 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-hql94"] Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.110831 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-hql94" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.114812 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.115094 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.115270 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-hh9r6" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.129041 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-hql94"] Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.142757 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6464b9bcc6-tjgjv" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.189173 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf-config-data\") pod \"nova-cell0-conductor-db-sync-hql94\" (UID: \"4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf\") " pod="openstack/nova-cell0-conductor-db-sync-hql94" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.189312 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-hql94\" (UID: \"4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf\") " pod="openstack/nova-cell0-conductor-db-sync-hql94" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.189414 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf-scripts\") pod \"nova-cell0-conductor-db-sync-hql94\" (UID: \"4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf\") " pod="openstack/nova-cell0-conductor-db-sync-hql94" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.189454 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsjnm\" (UniqueName: \"kubernetes.io/projected/4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf-kube-api-access-rsjnm\") pod \"nova-cell0-conductor-db-sync-hql94\" (UID: \"4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf\") " pod="openstack/nova-cell0-conductor-db-sync-hql94" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.248925 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.291348 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/aa581bf8-802c-4c64-80fe-83a1baf50a6e-horizon-secret-key\") pod \"aa581bf8-802c-4c64-80fe-83a1baf50a6e\" (UID: \"aa581bf8-802c-4c64-80fe-83a1baf50a6e\") " Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.291723 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aa581bf8-802c-4c64-80fe-83a1baf50a6e-scripts\") pod \"aa581bf8-802c-4c64-80fe-83a1baf50a6e\" (UID: \"aa581bf8-802c-4c64-80fe-83a1baf50a6e\") " Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.291770 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa581bf8-802c-4c64-80fe-83a1baf50a6e-combined-ca-bundle\") pod \"aa581bf8-802c-4c64-80fe-83a1baf50a6e\" (UID: \"aa581bf8-802c-4c64-80fe-83a1baf50a6e\") " Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.291828 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pv2px\" (UniqueName: \"kubernetes.io/projected/aa581bf8-802c-4c64-80fe-83a1baf50a6e-kube-api-access-pv2px\") pod \"aa581bf8-802c-4c64-80fe-83a1baf50a6e\" (UID: \"aa581bf8-802c-4c64-80fe-83a1baf50a6e\") " Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.291855 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aa581bf8-802c-4c64-80fe-83a1baf50a6e-config-data\") pod \"aa581bf8-802c-4c64-80fe-83a1baf50a6e\" (UID: \"aa581bf8-802c-4c64-80fe-83a1baf50a6e\") " Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.292016 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa581bf8-802c-4c64-80fe-83a1baf50a6e-horizon-tls-certs\") pod \"aa581bf8-802c-4c64-80fe-83a1baf50a6e\" (UID: \"aa581bf8-802c-4c64-80fe-83a1baf50a6e\") " Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.292050 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa581bf8-802c-4c64-80fe-83a1baf50a6e-logs\") pod \"aa581bf8-802c-4c64-80fe-83a1baf50a6e\" (UID: \"aa581bf8-802c-4c64-80fe-83a1baf50a6e\") " Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.292353 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf-config-data\") pod \"nova-cell0-conductor-db-sync-hql94\" (UID: \"4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf\") " pod="openstack/nova-cell0-conductor-db-sync-hql94" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.292456 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-hql94\" (UID: \"4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf\") " pod="openstack/nova-cell0-conductor-db-sync-hql94" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.292560 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf-scripts\") pod \"nova-cell0-conductor-db-sync-hql94\" (UID: \"4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf\") " pod="openstack/nova-cell0-conductor-db-sync-hql94" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.292607 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsjnm\" (UniqueName: \"kubernetes.io/projected/4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf-kube-api-access-rsjnm\") pod \"nova-cell0-conductor-db-sync-hql94\" (UID: \"4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf\") " pod="openstack/nova-cell0-conductor-db-sync-hql94" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.295466 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa581bf8-802c-4c64-80fe-83a1baf50a6e-logs" (OuterVolumeSpecName: "logs") pod "aa581bf8-802c-4c64-80fe-83a1baf50a6e" (UID: "aa581bf8-802c-4c64-80fe-83a1baf50a6e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.297844 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa581bf8-802c-4c64-80fe-83a1baf50a6e-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "aa581bf8-802c-4c64-80fe-83a1baf50a6e" (UID: "aa581bf8-802c-4c64-80fe-83a1baf50a6e"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.299094 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf-scripts\") pod \"nova-cell0-conductor-db-sync-hql94\" (UID: \"4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf\") " pod="openstack/nova-cell0-conductor-db-sync-hql94" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.299376 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-hql94\" (UID: \"4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf\") " pod="openstack/nova-cell0-conductor-db-sync-hql94" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.301542 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa581bf8-802c-4c64-80fe-83a1baf50a6e-kube-api-access-pv2px" (OuterVolumeSpecName: "kube-api-access-pv2px") pod "aa581bf8-802c-4c64-80fe-83a1baf50a6e" (UID: "aa581bf8-802c-4c64-80fe-83a1baf50a6e"). InnerVolumeSpecName "kube-api-access-pv2px". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.304074 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf-config-data\") pod \"nova-cell0-conductor-db-sync-hql94\" (UID: \"4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf\") " pod="openstack/nova-cell0-conductor-db-sync-hql94" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.311454 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsjnm\" (UniqueName: \"kubernetes.io/projected/4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf-kube-api-access-rsjnm\") pod \"nova-cell0-conductor-db-sync-hql94\" (UID: \"4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf\") " pod="openstack/nova-cell0-conductor-db-sync-hql94" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.321243 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa581bf8-802c-4c64-80fe-83a1baf50a6e-scripts" (OuterVolumeSpecName: "scripts") pod "aa581bf8-802c-4c64-80fe-83a1baf50a6e" (UID: "aa581bf8-802c-4c64-80fe-83a1baf50a6e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.325454 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa581bf8-802c-4c64-80fe-83a1baf50a6e-config-data" (OuterVolumeSpecName: "config-data") pod "aa581bf8-802c-4c64-80fe-83a1baf50a6e" (UID: "aa581bf8-802c-4c64-80fe-83a1baf50a6e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.326552 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa581bf8-802c-4c64-80fe-83a1baf50a6e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aa581bf8-802c-4c64-80fe-83a1baf50a6e" (UID: "aa581bf8-802c-4c64-80fe-83a1baf50a6e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.358614 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa581bf8-802c-4c64-80fe-83a1baf50a6e-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "aa581bf8-802c-4c64-80fe-83a1baf50a6e" (UID: "aa581bf8-802c-4c64-80fe-83a1baf50a6e"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.394302 4769 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa581bf8-802c-4c64-80fe-83a1baf50a6e-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.394340 4769 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa581bf8-802c-4c64-80fe-83a1baf50a6e-logs\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.394350 4769 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/aa581bf8-802c-4c64-80fe-83a1baf50a6e-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.394359 4769 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aa581bf8-802c-4c64-80fe-83a1baf50a6e-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.394367 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa581bf8-802c-4c64-80fe-83a1baf50a6e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.394376 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pv2px\" (UniqueName: \"kubernetes.io/projected/aa581bf8-802c-4c64-80fe-83a1baf50a6e-kube-api-access-pv2px\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.394386 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aa581bf8-802c-4c64-80fe-83a1baf50a6e-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.456715 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-hql94" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.653093 4769 generic.go:334] "Generic (PLEG): container finished" podID="aa581bf8-802c-4c64-80fe-83a1baf50a6e" containerID="b1c17d223ae3c6e1952926e3cf792e852ecbb7c481e6bf6d9e1501d916e79b79" exitCode=137 Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.653405 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6464b9bcc6-tjgjv" event={"ID":"aa581bf8-802c-4c64-80fe-83a1baf50a6e","Type":"ContainerDied","Data":"b1c17d223ae3c6e1952926e3cf792e852ecbb7c481e6bf6d9e1501d916e79b79"} Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.653452 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6464b9bcc6-tjgjv" event={"ID":"aa581bf8-802c-4c64-80fe-83a1baf50a6e","Type":"ContainerDied","Data":"a21b69f798a23fdcfdfb92adcc62b30839c1be6a1c5c04d00a869ead5ddc22a7"} Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.653473 4769 scope.go:117] "RemoveContainer" containerID="dc2e4c5fd0438679984690345cbc0e4820ff234a30678389437d5d203ba8a03a" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.653494 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6464b9bcc6-tjgjv" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.694598 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6464b9bcc6-tjgjv"] Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.704242 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.716477 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6464b9bcc6-tjgjv"] Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.838115 4769 scope.go:117] "RemoveContainer" containerID="b1c17d223ae3c6e1952926e3cf792e852ecbb7c481e6bf6d9e1501d916e79b79" Jan 22 14:03:05 crc kubenswrapper[4769]: W0122 14:03:05.853118 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a08cae6_6172_4bb5_9145_4bd967ff8652.slice/crio-7ffdcceaf87e641941223d39bc52c74d8b48c68c5b706146fd42462d26c3e6b3 WatchSource:0}: Error finding container 7ffdcceaf87e641941223d39bc52c74d8b48c68c5b706146fd42462d26c3e6b3: Status 404 returned error can't find the container with id 7ffdcceaf87e641941223d39bc52c74d8b48c68c5b706146fd42462d26c3e6b3 Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.908994 4769 scope.go:117] "RemoveContainer" containerID="dc2e4c5fd0438679984690345cbc0e4820ff234a30678389437d5d203ba8a03a" Jan 22 14:03:05 crc kubenswrapper[4769]: E0122 14:03:05.910472 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc2e4c5fd0438679984690345cbc0e4820ff234a30678389437d5d203ba8a03a\": container with ID starting with dc2e4c5fd0438679984690345cbc0e4820ff234a30678389437d5d203ba8a03a not found: ID does not exist" containerID="dc2e4c5fd0438679984690345cbc0e4820ff234a30678389437d5d203ba8a03a" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.910511 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc2e4c5fd0438679984690345cbc0e4820ff234a30678389437d5d203ba8a03a"} err="failed to get container status \"dc2e4c5fd0438679984690345cbc0e4820ff234a30678389437d5d203ba8a03a\": rpc error: code = NotFound desc = could not find container \"dc2e4c5fd0438679984690345cbc0e4820ff234a30678389437d5d203ba8a03a\": container with ID starting with dc2e4c5fd0438679984690345cbc0e4820ff234a30678389437d5d203ba8a03a not found: ID does not exist" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.910530 4769 scope.go:117] "RemoveContainer" containerID="b1c17d223ae3c6e1952926e3cf792e852ecbb7c481e6bf6d9e1501d916e79b79" Jan 22 14:03:05 crc kubenswrapper[4769]: E0122 14:03:05.910903 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1c17d223ae3c6e1952926e3cf792e852ecbb7c481e6bf6d9e1501d916e79b79\": container with ID starting with b1c17d223ae3c6e1952926e3cf792e852ecbb7c481e6bf6d9e1501d916e79b79 not found: ID does not exist" containerID="b1c17d223ae3c6e1952926e3cf792e852ecbb7c481e6bf6d9e1501d916e79b79" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.910930 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1c17d223ae3c6e1952926e3cf792e852ecbb7c481e6bf6d9e1501d916e79b79"} err="failed to get container status \"b1c17d223ae3c6e1952926e3cf792e852ecbb7c481e6bf6d9e1501d916e79b79\": rpc error: code = NotFound desc = could not find container 
\"b1c17d223ae3c6e1952926e3cf792e852ecbb7c481e6bf6d9e1501d916e79b79\": container with ID starting with b1c17d223ae3c6e1952926e3cf792e852ecbb7c481e6bf6d9e1501d916e79b79 not found: ID does not exist" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.957219 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-hql94"] Jan 22 14:03:06 crc kubenswrapper[4769]: I0122 14:03:06.664649 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-hql94" event={"ID":"4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf","Type":"ContainerStarted","Data":"1de90ac29d18bc8134c5a8f9409cf4f6984104454efcb5cd68aa76ba8988c519"} Jan 22 14:03:06 crc kubenswrapper[4769]: I0122 14:03:06.665941 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a08cae6-6172-4bb5-9145-4bd967ff8652","Type":"ContainerStarted","Data":"d12511606cbb3d139c42c9505d216c13a6b0282888ddd4e9ceca736cd31a0803"} Jan 22 14:03:06 crc kubenswrapper[4769]: I0122 14:03:06.665963 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a08cae6-6172-4bb5-9145-4bd967ff8652","Type":"ContainerStarted","Data":"7ffdcceaf87e641941223d39bc52c74d8b48c68c5b706146fd42462d26c3e6b3"} Jan 22 14:03:06 crc kubenswrapper[4769]: I0122 14:03:06.898237 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa581bf8-802c-4c64-80fe-83a1baf50a6e" path="/var/lib/kubelet/pods/aa581bf8-802c-4c64-80fe-83a1baf50a6e/volumes" Jan 22 14:03:07 crc kubenswrapper[4769]: I0122 14:03:07.687664 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a08cae6-6172-4bb5-9145-4bd967ff8652","Type":"ContainerStarted","Data":"cf03c5b093b848a8cf13336a277b6c6c320c64735982f0770e348eccb16fc817"} Jan 22 14:03:08 crc kubenswrapper[4769]: I0122 14:03:08.097184 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 22 14:03:08 crc kubenswrapper[4769]: I0122 14:03:08.097268 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 22 14:03:08 crc kubenswrapper[4769]: I0122 14:03:08.145715 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 22 14:03:08 crc kubenswrapper[4769]: I0122 14:03:08.145840 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 22 14:03:08 crc kubenswrapper[4769]: I0122 14:03:08.705351 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a08cae6-6172-4bb5-9145-4bd967ff8652","Type":"ContainerStarted","Data":"0571a53b9fe417d9b61564df6d67f72bf69a843d7774f1a04bd4b9a4c1ff791e"} Jan 22 14:03:08 crc kubenswrapper[4769]: I0122 14:03:08.705398 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 22 14:03:08 crc kubenswrapper[4769]: I0122 14:03:08.705563 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 22 14:03:09 crc kubenswrapper[4769]: I0122 14:03:09.738686 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a08cae6-6172-4bb5-9145-4bd967ff8652","Type":"ContainerStarted","Data":"146393a28b927d8945f4f1b9a4097563dd6740cfea691a9bb5aea3d8298c2c17"} Jan 22 14:03:09 crc 
kubenswrapper[4769]: I0122 14:03:09.739105 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 22 14:03:09 crc kubenswrapper[4769]: I0122 14:03:09.765953 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.645435591 podStartE2EDuration="5.765935761s" podCreationTimestamp="2026-01-22 14:03:04 +0000 UTC" firstStartedPulling="2026-01-22 14:03:05.856152982 +0000 UTC m=+1165.267262911" lastFinishedPulling="2026-01-22 14:03:08.976653152 +0000 UTC m=+1168.387763081" observedRunningTime="2026-01-22 14:03:09.761673396 +0000 UTC m=+1169.172783325" watchObservedRunningTime="2026-01-22 14:03:09.765935761 +0000 UTC m=+1169.177045690" Jan 22 14:03:10 crc kubenswrapper[4769]: I0122 14:03:10.201098 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 22 14:03:10 crc kubenswrapper[4769]: I0122 14:03:10.201160 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 22 14:03:10 crc kubenswrapper[4769]: I0122 14:03:10.246287 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 22 14:03:10 crc kubenswrapper[4769]: I0122 14:03:10.275236 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 22 14:03:10 crc kubenswrapper[4769]: I0122 14:03:10.481607 4769 patch_prober.go:28] interesting pod/machine-config-daemon-hwhw7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 14:03:10 crc kubenswrapper[4769]: I0122 14:03:10.481705 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 14:03:10 crc kubenswrapper[4769]: I0122 14:03:10.481780 4769 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" Jan 22 14:03:10 crc kubenswrapper[4769]: I0122 14:03:10.482652 4769 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"53e8fc2db9705c596d7460e51a2fbb034ceda2ed4d75e601aaaaedcba02d24aa"} pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 14:03:10 crc kubenswrapper[4769]: I0122 14:03:10.482721 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419" containerName="machine-config-daemon" containerID="cri-o://53e8fc2db9705c596d7460e51a2fbb034ceda2ed4d75e601aaaaedcba02d24aa" gracePeriod=600 Jan 22 14:03:10 crc kubenswrapper[4769]: I0122 14:03:10.759723 4769 generic.go:334] "Generic (PLEG): container finished" podID="f0af8746-c9f0-48e6-8a60-02fed286b419" containerID="53e8fc2db9705c596d7460e51a2fbb034ceda2ed4d75e601aaaaedcba02d24aa" exitCode=0 Jan 22 14:03:10 crc 
kubenswrapper[4769]: I0122 14:03:10.759882 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" event={"ID":"f0af8746-c9f0-48e6-8a60-02fed286b419","Type":"ContainerDied","Data":"53e8fc2db9705c596d7460e51a2fbb034ceda2ed4d75e601aaaaedcba02d24aa"} Jan 22 14:03:10 crc kubenswrapper[4769]: I0122 14:03:10.759949 4769 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 14:03:10 crc kubenswrapper[4769]: I0122 14:03:10.759964 4769 scope.go:117] "RemoveContainer" containerID="ee8cd9f7d29583d39d5d09ca76eab4931e04c9d5e08aa5de68839001387a3d8e" Jan 22 14:03:10 crc kubenswrapper[4769]: I0122 14:03:10.759968 4769 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 14:03:10 crc kubenswrapper[4769]: I0122 14:03:10.760777 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 22 14:03:10 crc kubenswrapper[4769]: I0122 14:03:10.760887 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 22 14:03:10 crc kubenswrapper[4769]: I0122 14:03:10.917439 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 22 14:03:10 crc kubenswrapper[4769]: I0122 14:03:10.923746 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 22 14:03:11 crc kubenswrapper[4769]: I0122 14:03:11.089653 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 14:03:11 crc kubenswrapper[4769]: I0122 14:03:11.769066 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0a08cae6-6172-4bb5-9145-4bd967ff8652" containerName="ceilometer-central-agent" containerID="cri-o://d12511606cbb3d139c42c9505d216c13a6b0282888ddd4e9ceca736cd31a0803" gracePeriod=30 Jan 22 14:03:11 crc kubenswrapper[4769]: I0122 14:03:11.769114 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0a08cae6-6172-4bb5-9145-4bd967ff8652" containerName="sg-core" containerID="cri-o://0571a53b9fe417d9b61564df6d67f72bf69a843d7774f1a04bd4b9a4c1ff791e" gracePeriod=30 Jan 22 14:03:11 crc kubenswrapper[4769]: I0122 14:03:11.769141 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0a08cae6-6172-4bb5-9145-4bd967ff8652" containerName="proxy-httpd" containerID="cri-o://146393a28b927d8945f4f1b9a4097563dd6740cfea691a9bb5aea3d8298c2c17" gracePeriod=30 Jan 22 14:03:11 crc kubenswrapper[4769]: I0122 14:03:11.769177 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0a08cae6-6172-4bb5-9145-4bd967ff8652" containerName="ceilometer-notification-agent" containerID="cri-o://cf03c5b093b848a8cf13336a277b6c6c320c64735982f0770e348eccb16fc817" gracePeriod=30 Jan 22 14:03:12 crc kubenswrapper[4769]: I0122 14:03:12.803692 4769 generic.go:334] "Generic (PLEG): container finished" podID="0a08cae6-6172-4bb5-9145-4bd967ff8652" containerID="146393a28b927d8945f4f1b9a4097563dd6740cfea691a9bb5aea3d8298c2c17" exitCode=0 Jan 22 14:03:12 crc kubenswrapper[4769]: I0122 14:03:12.803736 4769 generic.go:334] "Generic (PLEG): container finished" podID="0a08cae6-6172-4bb5-9145-4bd967ff8652" containerID="0571a53b9fe417d9b61564df6d67f72bf69a843d7774f1a04bd4b9a4c1ff791e" 
exitCode=2 Jan 22 14:03:12 crc kubenswrapper[4769]: I0122 14:03:12.803746 4769 generic.go:334] "Generic (PLEG): container finished" podID="0a08cae6-6172-4bb5-9145-4bd967ff8652" containerID="cf03c5b093b848a8cf13336a277b6c6c320c64735982f0770e348eccb16fc817" exitCode=0 Jan 22 14:03:12 crc kubenswrapper[4769]: I0122 14:03:12.804600 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a08cae6-6172-4bb5-9145-4bd967ff8652","Type":"ContainerDied","Data":"146393a28b927d8945f4f1b9a4097563dd6740cfea691a9bb5aea3d8298c2c17"} Jan 22 14:03:12 crc kubenswrapper[4769]: I0122 14:03:12.804632 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a08cae6-6172-4bb5-9145-4bd967ff8652","Type":"ContainerDied","Data":"0571a53b9fe417d9b61564df6d67f72bf69a843d7774f1a04bd4b9a4c1ff791e"} Jan 22 14:03:12 crc kubenswrapper[4769]: I0122 14:03:12.804644 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a08cae6-6172-4bb5-9145-4bd967ff8652","Type":"ContainerDied","Data":"cf03c5b093b848a8cf13336a277b6c6c320c64735982f0770e348eccb16fc817"} Jan 22 14:03:12 crc kubenswrapper[4769]: I0122 14:03:12.985304 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 22 14:03:12 crc kubenswrapper[4769]: I0122 14:03:12.985745 4769 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 14:03:13 crc kubenswrapper[4769]: I0122 14:03:13.102505 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 22 14:03:15 crc kubenswrapper[4769]: I0122 14:03:15.831148 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-hql94" event={"ID":"4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf","Type":"ContainerStarted","Data":"18279fc40052f609766481b086ba6db177d4033484da61ddaf6b1e3ccb376090"} Jan 22 14:03:15 crc kubenswrapper[4769]: I0122 14:03:15.835179 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" event={"ID":"f0af8746-c9f0-48e6-8a60-02fed286b419","Type":"ContainerStarted","Data":"b11c852b1916b3e6aabc4731560f2f295531ff82773fd1f45e29d26517b1467f"} Jan 22 14:03:15 crc kubenswrapper[4769]: I0122 14:03:15.856585 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-hql94" podStartSLOduration=1.277467406 podStartE2EDuration="10.856563317s" podCreationTimestamp="2026-01-22 14:03:05 +0000 UTC" firstStartedPulling="2026-01-22 14:03:05.968539749 +0000 UTC m=+1165.379649678" lastFinishedPulling="2026-01-22 14:03:15.54763566 +0000 UTC m=+1174.958745589" observedRunningTime="2026-01-22 14:03:15.849532756 +0000 UTC m=+1175.260642695" watchObservedRunningTime="2026-01-22 14:03:15.856563317 +0000 UTC m=+1175.267673246" Jan 22 14:03:16 crc kubenswrapper[4769]: I0122 14:03:16.850972 4769 generic.go:334] "Generic (PLEG): container finished" podID="0a08cae6-6172-4bb5-9145-4bd967ff8652" containerID="d12511606cbb3d139c42c9505d216c13a6b0282888ddd4e9ceca736cd31a0803" exitCode=0 Jan 22 14:03:16 crc kubenswrapper[4769]: I0122 14:03:16.852500 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a08cae6-6172-4bb5-9145-4bd967ff8652","Type":"ContainerDied","Data":"d12511606cbb3d139c42c9505d216c13a6b0282888ddd4e9ceca736cd31a0803"} Jan 22 14:03:16 crc kubenswrapper[4769]: I0122 
Jan 22 14:03:17 crc kubenswrapper[4769]: I0122 14:03:17.121527 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a08cae6-6172-4bb5-9145-4bd967ff8652-config-data\") pod \"0a08cae6-6172-4bb5-9145-4bd967ff8652\" (UID: \"0a08cae6-6172-4bb5-9145-4bd967ff8652\") "
Jan 22 14:03:17 crc kubenswrapper[4769]: I0122 14:03:17.121604 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0a08cae6-6172-4bb5-9145-4bd967ff8652-sg-core-conf-yaml\") pod \"0a08cae6-6172-4bb5-9145-4bd967ff8652\" (UID: \"0a08cae6-6172-4bb5-9145-4bd967ff8652\") "
Jan 22 14:03:17 crc kubenswrapper[4769]: I0122 14:03:17.121713 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a08cae6-6172-4bb5-9145-4bd967ff8652-log-httpd\") pod \"0a08cae6-6172-4bb5-9145-4bd967ff8652\" (UID: \"0a08cae6-6172-4bb5-9145-4bd967ff8652\") "
Jan 22 14:03:17 crc kubenswrapper[4769]: I0122 14:03:17.121768 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a08cae6-6172-4bb5-9145-4bd967ff8652-combined-ca-bundle\") pod \"0a08cae6-6172-4bb5-9145-4bd967ff8652\" (UID: \"0a08cae6-6172-4bb5-9145-4bd967ff8652\") "
Jan 22 14:03:17 crc kubenswrapper[4769]: I0122 14:03:17.122288 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a08cae6-6172-4bb5-9145-4bd967ff8652-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0a08cae6-6172-4bb5-9145-4bd967ff8652" (UID: "0a08cae6-6172-4bb5-9145-4bd967ff8652"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 14:03:17 crc kubenswrapper[4769]: I0122 14:03:17.123234 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a08cae6-6172-4bb5-9145-4bd967ff8652-run-httpd\") pod \"0a08cae6-6172-4bb5-9145-4bd967ff8652\" (UID: \"0a08cae6-6172-4bb5-9145-4bd967ff8652\") "
Jan 22 14:03:17 crc kubenswrapper[4769]: I0122 14:03:17.123560 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a08cae6-6172-4bb5-9145-4bd967ff8652-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0a08cae6-6172-4bb5-9145-4bd967ff8652" (UID: "0a08cae6-6172-4bb5-9145-4bd967ff8652"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 14:03:17 crc kubenswrapper[4769]: I0122 14:03:17.123671 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a08cae6-6172-4bb5-9145-4bd967ff8652-scripts\") pod \"0a08cae6-6172-4bb5-9145-4bd967ff8652\" (UID: \"0a08cae6-6172-4bb5-9145-4bd967ff8652\") "
Jan 22 14:03:17 crc kubenswrapper[4769]: I0122 14:03:17.123743 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfbwq\" (UniqueName: \"kubernetes.io/projected/0a08cae6-6172-4bb5-9145-4bd967ff8652-kube-api-access-rfbwq\") pod \"0a08cae6-6172-4bb5-9145-4bd967ff8652\" (UID: \"0a08cae6-6172-4bb5-9145-4bd967ff8652\") "
Jan 22 14:03:17 crc kubenswrapper[4769]: I0122 14:03:17.125040 4769 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a08cae6-6172-4bb5-9145-4bd967ff8652-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 22 14:03:17 crc kubenswrapper[4769]: I0122 14:03:17.125066 4769 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a08cae6-6172-4bb5-9145-4bd967ff8652-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 22 14:03:17 crc kubenswrapper[4769]: I0122 14:03:17.138093 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a08cae6-6172-4bb5-9145-4bd967ff8652-kube-api-access-rfbwq" (OuterVolumeSpecName: "kube-api-access-rfbwq") pod "0a08cae6-6172-4bb5-9145-4bd967ff8652" (UID: "0a08cae6-6172-4bb5-9145-4bd967ff8652"). InnerVolumeSpecName "kube-api-access-rfbwq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 14:03:17 crc kubenswrapper[4769]: I0122 14:03:17.138108 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a08cae6-6172-4bb5-9145-4bd967ff8652-scripts" (OuterVolumeSpecName: "scripts") pod "0a08cae6-6172-4bb5-9145-4bd967ff8652" (UID: "0a08cae6-6172-4bb5-9145-4bd967ff8652"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 14:03:17 crc kubenswrapper[4769]: I0122 14:03:17.164847 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a08cae6-6172-4bb5-9145-4bd967ff8652-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0a08cae6-6172-4bb5-9145-4bd967ff8652" (UID: "0a08cae6-6172-4bb5-9145-4bd967ff8652"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 14:03:17 crc kubenswrapper[4769]: I0122 14:03:17.218046 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a08cae6-6172-4bb5-9145-4bd967ff8652-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0a08cae6-6172-4bb5-9145-4bd967ff8652" (UID: "0a08cae6-6172-4bb5-9145-4bd967ff8652"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 14:03:17 crc kubenswrapper[4769]: I0122 14:03:17.226693 4769 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a08cae6-6172-4bb5-9145-4bd967ff8652-scripts\") on node \"crc\" DevicePath \"\""
Jan 22 14:03:17 crc kubenswrapper[4769]: I0122 14:03:17.226736 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfbwq\" (UniqueName: \"kubernetes.io/projected/0a08cae6-6172-4bb5-9145-4bd967ff8652-kube-api-access-rfbwq\") on node \"crc\" DevicePath \"\""
Jan 22 14:03:17 crc kubenswrapper[4769]: I0122 14:03:17.226750 4769 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0a08cae6-6172-4bb5-9145-4bd967ff8652-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 22 14:03:17 crc kubenswrapper[4769]: I0122 14:03:17.226762 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a08cae6-6172-4bb5-9145-4bd967ff8652-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 14:03:17 crc kubenswrapper[4769]: I0122 14:03:17.240589 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a08cae6-6172-4bb5-9145-4bd967ff8652-config-data" (OuterVolumeSpecName: "config-data") pod "0a08cae6-6172-4bb5-9145-4bd967ff8652" (UID: "0a08cae6-6172-4bb5-9145-4bd967ff8652"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 14:03:17 crc kubenswrapper[4769]: I0122 14:03:17.328361 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a08cae6-6172-4bb5-9145-4bd967ff8652-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 14:03:17 crc kubenswrapper[4769]: I0122 14:03:17.865227 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a08cae6-6172-4bb5-9145-4bd967ff8652","Type":"ContainerDied","Data":"7ffdcceaf87e641941223d39bc52c74d8b48c68c5b706146fd42462d26c3e6b3"}
Jan 22 14:03:17 crc kubenswrapper[4769]: I0122 14:03:17.865285 4769 scope.go:117] "RemoveContainer" containerID="146393a28b927d8945f4f1b9a4097563dd6740cfea691a9bb5aea3d8298c2c17"
Jan 22 14:03:17 crc kubenswrapper[4769]: I0122 14:03:17.865443 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 22 14:03:17 crc kubenswrapper[4769]: I0122 14:03:17.939580 4769 scope.go:117] "RemoveContainer" containerID="0571a53b9fe417d9b61564df6d67f72bf69a843d7774f1a04bd4b9a4c1ff791e"
Jan 22 14:03:17 crc kubenswrapper[4769]: I0122 14:03:17.942597 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 22 14:03:17 crc kubenswrapper[4769]: I0122 14:03:17.951347 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 22 14:03:17 crc kubenswrapper[4769]: I0122 14:03:17.960883 4769 scope.go:117] "RemoveContainer" containerID="cf03c5b093b848a8cf13336a277b6c6c320c64735982f0770e348eccb16fc817"
Jan 22 14:03:18 crc kubenswrapper[4769]: I0122 14:03:18.004889 4769 scope.go:117] "RemoveContainer" containerID="d12511606cbb3d139c42c9505d216c13a6b0282888ddd4e9ceca736cd31a0803"
Jan 22 14:03:18 crc kubenswrapper[4769]: I0122 14:03:18.013698 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 22 14:03:18 crc kubenswrapper[4769]: E0122 14:03:18.014167 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa581bf8-802c-4c64-80fe-83a1baf50a6e" containerName="horizon-log"
Jan 22 14:03:18 crc kubenswrapper[4769]: I0122 14:03:18.014191 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa581bf8-802c-4c64-80fe-83a1baf50a6e" containerName="horizon-log"
Jan 22 14:03:18 crc kubenswrapper[4769]: E0122 14:03:18.014202 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a08cae6-6172-4bb5-9145-4bd967ff8652" containerName="proxy-httpd"
Jan 22 14:03:18 crc kubenswrapper[4769]: I0122 14:03:18.014209 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a08cae6-6172-4bb5-9145-4bd967ff8652" containerName="proxy-httpd"
Jan 22 14:03:18 crc kubenswrapper[4769]: E0122 14:03:18.014228 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a08cae6-6172-4bb5-9145-4bd967ff8652" containerName="ceilometer-central-agent"
Jan 22 14:03:18 crc kubenswrapper[4769]: I0122 14:03:18.014234 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a08cae6-6172-4bb5-9145-4bd967ff8652" containerName="ceilometer-central-agent"
Jan 22 14:03:18 crc kubenswrapper[4769]: E0122 14:03:18.014254 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa581bf8-802c-4c64-80fe-83a1baf50a6e" containerName="horizon"
Jan 22 14:03:18 crc kubenswrapper[4769]: I0122 14:03:18.014261 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa581bf8-802c-4c64-80fe-83a1baf50a6e" containerName="horizon"
Jan 22 14:03:18 crc kubenswrapper[4769]: E0122 14:03:18.014275 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a08cae6-6172-4bb5-9145-4bd967ff8652" containerName="ceilometer-notification-agent"
Jan 22 14:03:18 crc kubenswrapper[4769]: I0122 14:03:18.014282 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a08cae6-6172-4bb5-9145-4bd967ff8652" containerName="ceilometer-notification-agent"
Jan 22 14:03:18 crc kubenswrapper[4769]: E0122 14:03:18.014290 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a08cae6-6172-4bb5-9145-4bd967ff8652" containerName="sg-core"
Jan 22 14:03:18 crc kubenswrapper[4769]: I0122 14:03:18.014295 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a08cae6-6172-4bb5-9145-4bd967ff8652" containerName="sg-core"
Jan 22 14:03:18 crc kubenswrapper[4769]: I0122 14:03:18.014449 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa581bf8-802c-4c64-80fe-83a1baf50a6e" containerName="horizon"
Jan 22 14:03:18 crc kubenswrapper[4769]: I0122 14:03:18.014460 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a08cae6-6172-4bb5-9145-4bd967ff8652" containerName="ceilometer-notification-agent"
Jan 22 14:03:18 crc kubenswrapper[4769]: I0122 14:03:18.014476 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a08cae6-6172-4bb5-9145-4bd967ff8652" containerName="ceilometer-central-agent"
Jan 22 14:03:18 crc kubenswrapper[4769]: I0122 14:03:18.014487 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a08cae6-6172-4bb5-9145-4bd967ff8652" containerName="proxy-httpd"
Jan 22 14:03:18 crc kubenswrapper[4769]: I0122 14:03:18.014495 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa581bf8-802c-4c64-80fe-83a1baf50a6e" containerName="horizon-log"
Jan 22 14:03:18 crc kubenswrapper[4769]: I0122 14:03:18.014505 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a08cae6-6172-4bb5-9145-4bd967ff8652" containerName="sg-core"
Jan 22 14:03:18 crc kubenswrapper[4769]: I0122 14:03:18.016233 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 22 14:03:18 crc kubenswrapper[4769]: I0122 14:03:18.020254 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 22 14:03:18 crc kubenswrapper[4769]: I0122 14:03:18.020538 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 22 14:03:18 crc kubenswrapper[4769]: I0122 14:03:18.039267 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 22 14:03:18 crc kubenswrapper[4769]: I0122 14:03:18.143493 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2da17df6-1c4c-453a-9943-4a44e8a14664-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2da17df6-1c4c-453a-9943-4a44e8a14664\") " pod="openstack/ceilometer-0"
Jan 22 14:03:18 crc kubenswrapper[4769]: I0122 14:03:18.143556 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2da17df6-1c4c-453a-9943-4a44e8a14664-log-httpd\") pod \"ceilometer-0\" (UID: \"2da17df6-1c4c-453a-9943-4a44e8a14664\") " pod="openstack/ceilometer-0"
Jan 22 14:03:18 crc kubenswrapper[4769]: I0122 14:03:18.143592 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2da17df6-1c4c-453a-9943-4a44e8a14664-scripts\") pod \"ceilometer-0\" (UID: \"2da17df6-1c4c-453a-9943-4a44e8a14664\") " pod="openstack/ceilometer-0"
Jan 22 14:03:18 crc kubenswrapper[4769]: I0122 14:03:18.143614 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2da17df6-1c4c-453a-9943-4a44e8a14664-run-httpd\") pod \"ceilometer-0\" (UID: \"2da17df6-1c4c-453a-9943-4a44e8a14664\") " pod="openstack/ceilometer-0"
Jan 22 14:03:18 crc kubenswrapper[4769]: I0122 14:03:18.143651 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqvxq\" (UniqueName: \"kubernetes.io/projected/2da17df6-1c4c-453a-9943-4a44e8a14664-kube-api-access-rqvxq\") pod \"ceilometer-0\" (UID: \"2da17df6-1c4c-453a-9943-4a44e8a14664\") " pod="openstack/ceilometer-0"
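[Editor's annotation] The paired cpu_manager.go:410/state_mem.go:107 entries and the memory_manager.go:354 entries above appear when the kubelet admits the replacement ceilometer-0 pod: before assigning resources it drops checkpointed CPU and memory state for containers of pods that no longer exist (here the old ceilometer-0 UID and an old horizon pod). A minimal sketch of that cleanup with hypothetical state types; the real managers checkpoint considerably more:

// stale_state_sketch.go - illustrative removal of per-container resource
// state for inactive pods, as in the RemoveStaleState entries above.
package main

import "fmt"

type key struct{ podUID, container string }

func removeStaleState(assignments map[key]string, active map[string]bool) {
	for k := range assignments {
		if !active[k.podUID] {
			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n",
				k.podUID, k.container)
			delete(assignments, k) // "Deleted CPUSet assignment"
		}
	}
}

func main() {
	assignments := map[key]string{
		{"0a08cae6-6172-4bb5-9145-4bd967ff8652", "sg-core"}: "0-3",
		{"aa581bf8-802c-4c64-80fe-83a1baf50a6e", "horizon"}: "0-3",
	}
	// Only the replacement pod's UID is still active.
	removeStaleState(assignments, map[string]bool{"2da17df6-1c4c-453a-9943-4a44e8a14664": true})
}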
\"2da17df6-1c4c-453a-9943-4a44e8a14664\") " pod="openstack/ceilometer-0" Jan 22 14:03:18 crc kubenswrapper[4769]: I0122 14:03:18.143677 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2da17df6-1c4c-453a-9943-4a44e8a14664-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2da17df6-1c4c-453a-9943-4a44e8a14664\") " pod="openstack/ceilometer-0" Jan 22 14:03:18 crc kubenswrapper[4769]: I0122 14:03:18.143749 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2da17df6-1c4c-453a-9943-4a44e8a14664-config-data\") pod \"ceilometer-0\" (UID: \"2da17df6-1c4c-453a-9943-4a44e8a14664\") " pod="openstack/ceilometer-0" Jan 22 14:03:18 crc kubenswrapper[4769]: I0122 14:03:18.245044 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2da17df6-1c4c-453a-9943-4a44e8a14664-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2da17df6-1c4c-453a-9943-4a44e8a14664\") " pod="openstack/ceilometer-0" Jan 22 14:03:18 crc kubenswrapper[4769]: I0122 14:03:18.245325 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2da17df6-1c4c-453a-9943-4a44e8a14664-config-data\") pod \"ceilometer-0\" (UID: \"2da17df6-1c4c-453a-9943-4a44e8a14664\") " pod="openstack/ceilometer-0" Jan 22 14:03:18 crc kubenswrapper[4769]: I0122 14:03:18.245518 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2da17df6-1c4c-453a-9943-4a44e8a14664-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2da17df6-1c4c-453a-9943-4a44e8a14664\") " pod="openstack/ceilometer-0" Jan 22 14:03:18 crc kubenswrapper[4769]: I0122 14:03:18.245625 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2da17df6-1c4c-453a-9943-4a44e8a14664-log-httpd\") pod \"ceilometer-0\" (UID: \"2da17df6-1c4c-453a-9943-4a44e8a14664\") " pod="openstack/ceilometer-0" Jan 22 14:03:18 crc kubenswrapper[4769]: I0122 14:03:18.245730 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2da17df6-1c4c-453a-9943-4a44e8a14664-scripts\") pod \"ceilometer-0\" (UID: \"2da17df6-1c4c-453a-9943-4a44e8a14664\") " pod="openstack/ceilometer-0" Jan 22 14:03:18 crc kubenswrapper[4769]: I0122 14:03:18.245850 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2da17df6-1c4c-453a-9943-4a44e8a14664-run-httpd\") pod \"ceilometer-0\" (UID: \"2da17df6-1c4c-453a-9943-4a44e8a14664\") " pod="openstack/ceilometer-0" Jan 22 14:03:18 crc kubenswrapper[4769]: I0122 14:03:18.245958 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqvxq\" (UniqueName: \"kubernetes.io/projected/2da17df6-1c4c-453a-9943-4a44e8a14664-kube-api-access-rqvxq\") pod \"ceilometer-0\" (UID: \"2da17df6-1c4c-453a-9943-4a44e8a14664\") " pod="openstack/ceilometer-0" Jan 22 14:03:18 crc kubenswrapper[4769]: I0122 14:03:18.247823 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2da17df6-1c4c-453a-9943-4a44e8a14664-log-httpd\") pod \"ceilometer-0\" 
(UID: \"2da17df6-1c4c-453a-9943-4a44e8a14664\") " pod="openstack/ceilometer-0" Jan 22 14:03:18 crc kubenswrapper[4769]: I0122 14:03:18.247826 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2da17df6-1c4c-453a-9943-4a44e8a14664-run-httpd\") pod \"ceilometer-0\" (UID: \"2da17df6-1c4c-453a-9943-4a44e8a14664\") " pod="openstack/ceilometer-0" Jan 22 14:03:18 crc kubenswrapper[4769]: I0122 14:03:18.253378 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2da17df6-1c4c-453a-9943-4a44e8a14664-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2da17df6-1c4c-453a-9943-4a44e8a14664\") " pod="openstack/ceilometer-0" Jan 22 14:03:18 crc kubenswrapper[4769]: I0122 14:03:18.253395 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2da17df6-1c4c-453a-9943-4a44e8a14664-scripts\") pod \"ceilometer-0\" (UID: \"2da17df6-1c4c-453a-9943-4a44e8a14664\") " pod="openstack/ceilometer-0" Jan 22 14:03:18 crc kubenswrapper[4769]: I0122 14:03:18.254098 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2da17df6-1c4c-453a-9943-4a44e8a14664-config-data\") pod \"ceilometer-0\" (UID: \"2da17df6-1c4c-453a-9943-4a44e8a14664\") " pod="openstack/ceilometer-0" Jan 22 14:03:18 crc kubenswrapper[4769]: I0122 14:03:18.254346 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2da17df6-1c4c-453a-9943-4a44e8a14664-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2da17df6-1c4c-453a-9943-4a44e8a14664\") " pod="openstack/ceilometer-0" Jan 22 14:03:18 crc kubenswrapper[4769]: I0122 14:03:18.265831 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqvxq\" (UniqueName: \"kubernetes.io/projected/2da17df6-1c4c-453a-9943-4a44e8a14664-kube-api-access-rqvxq\") pod \"ceilometer-0\" (UID: \"2da17df6-1c4c-453a-9943-4a44e8a14664\") " pod="openstack/ceilometer-0" Jan 22 14:03:18 crc kubenswrapper[4769]: I0122 14:03:18.334091 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 14:03:18 crc kubenswrapper[4769]: I0122 14:03:18.782545 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 14:03:18 crc kubenswrapper[4769]: W0122 14:03:18.789005 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2da17df6_1c4c_453a_9943_4a44e8a14664.slice/crio-63dc06d195b1c97ecfdd599025f891b09dac847761b101705571822c9d3ef1a0 WatchSource:0}: Error finding container 63dc06d195b1c97ecfdd599025f891b09dac847761b101705571822c9d3ef1a0: Status 404 returned error can't find the container with id 63dc06d195b1c97ecfdd599025f891b09dac847761b101705571822c9d3ef1a0 Jan 22 14:03:18 crc kubenswrapper[4769]: I0122 14:03:18.877367 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2da17df6-1c4c-453a-9943-4a44e8a14664","Type":"ContainerStarted","Data":"63dc06d195b1c97ecfdd599025f891b09dac847761b101705571822c9d3ef1a0"} Jan 22 14:03:18 crc kubenswrapper[4769]: I0122 14:03:18.893245 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a08cae6-6172-4bb5-9145-4bd967ff8652" path="/var/lib/kubelet/pods/0a08cae6-6172-4bb5-9145-4bd967ff8652/volumes" Jan 22 14:03:19 crc kubenswrapper[4769]: I0122 14:03:19.887947 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2da17df6-1c4c-453a-9943-4a44e8a14664","Type":"ContainerStarted","Data":"15eac8b08c32812a039810bb011b46bf61ee7b4ab7cdc8b93d737f5a20210c46"} Jan 22 14:03:23 crc kubenswrapper[4769]: I0122 14:03:23.930865 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2da17df6-1c4c-453a-9943-4a44e8a14664","Type":"ContainerStarted","Data":"b5629e480d5f9bca2b9aefb9619e124dd88f058584573bab31d2157d72077ec5"} Jan 22 14:03:24 crc kubenswrapper[4769]: I0122 14:03:24.941167 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2da17df6-1c4c-453a-9943-4a44e8a14664","Type":"ContainerStarted","Data":"18e6c2922fc56fe03b8bd1a70aa73fd29a75c4ee02f29e129940eb6d615fd947"} Jan 22 14:03:26 crc kubenswrapper[4769]: I0122 14:03:26.963169 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2da17df6-1c4c-453a-9943-4a44e8a14664","Type":"ContainerStarted","Data":"0bb74bf9b515919f39e14655679413cd135c984d3d72697791b38e7390ffc533"} Jan 22 14:03:26 crc kubenswrapper[4769]: I0122 14:03:26.965059 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 22 14:03:26 crc kubenswrapper[4769]: I0122 14:03:26.988701 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.898638312 podStartE2EDuration="9.988681102s" podCreationTimestamp="2026-01-22 14:03:17 +0000 UTC" firstStartedPulling="2026-01-22 14:03:18.791229995 +0000 UTC m=+1178.202339924" lastFinishedPulling="2026-01-22 14:03:25.881272785 +0000 UTC m=+1185.292382714" observedRunningTime="2026-01-22 14:03:26.986496673 +0000 UTC m=+1186.397606602" watchObservedRunningTime="2026-01-22 14:03:26.988681102 +0000 UTC m=+1186.399791021" Jan 22 14:03:27 crc kubenswrapper[4769]: I0122 14:03:27.972448 4769 generic.go:334] "Generic (PLEG): container finished" podID="4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf" containerID="18279fc40052f609766481b086ba6db177d4033484da61ddaf6b1e3ccb376090" exitCode=0 Jan 22 14:03:27 crc 
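[Editor's annotation] The single W (warning) line above is a benign race: the cAdvisor watcher sees the new crio-63dc06d1... cgroup before the runtime has finished registering the container, so the lookup returns 404; the very next PLEG event shows the same container ID starting normally. A consumer of such events can simply treat not-found as retryable; a small sketch with a hypothetical lookup function:

// watch_404_sketch.go - tolerating the not-found race behind the
// "Failed to process watch event ... Status 404" warning above.
package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("status 404: container not found")

// lookup stands in for the cgroup-to-container resolution that raced above.
func lookup(id string, attempt int) error {
	if attempt == 0 {
		return errNotFound // runtime has not registered the container yet
	}
	return nil
}

func main() {
	id := "63dc06d195b1c97ecfdd599025f891b09dac847761b101705571822c9d3ef1a0"
	for attempt := 0; ; attempt++ {
		err := lookup(id, attempt)
		if errors.Is(err, errNotFound) {
			continue // benign race: a later event will carry this ID
		}
		fmt.Println("found", id[:12], "err:", err)
		return
	}
}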
Jan 22 14:03:29 crc kubenswrapper[4769]: I0122 14:03:29.310347 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-hql94"
Jan 22 14:03:29 crc kubenswrapper[4769]: I0122 14:03:29.478461 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf-config-data\") pod \"4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf\" (UID: \"4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf\") "
Jan 22 14:03:29 crc kubenswrapper[4769]: I0122 14:03:29.478521 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf-scripts\") pod \"4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf\" (UID: \"4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf\") "
Jan 22 14:03:29 crc kubenswrapper[4769]: I0122 14:03:29.478546 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rsjnm\" (UniqueName: \"kubernetes.io/projected/4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf-kube-api-access-rsjnm\") pod \"4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf\" (UID: \"4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf\") "
Jan 22 14:03:29 crc kubenswrapper[4769]: I0122 14:03:29.478675 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf-combined-ca-bundle\") pod \"4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf\" (UID: \"4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf\") "
Jan 22 14:03:29 crc kubenswrapper[4769]: I0122 14:03:29.497185 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf-scripts" (OuterVolumeSpecName: "scripts") pod "4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf" (UID: "4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 14:03:29 crc kubenswrapper[4769]: I0122 14:03:29.497282 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf-kube-api-access-rsjnm" (OuterVolumeSpecName: "kube-api-access-rsjnm") pod "4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf" (UID: "4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf"). InnerVolumeSpecName "kube-api-access-rsjnm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 14:03:29 crc kubenswrapper[4769]: I0122 14:03:29.507593 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf" (UID: "4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 14:03:29 crc kubenswrapper[4769]: I0122 14:03:29.517815 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf-config-data" (OuterVolumeSpecName: "config-data") pod "4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf" (UID: "4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 14:03:29 crc kubenswrapper[4769]: I0122 14:03:29.581016 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 14:03:29 crc kubenswrapper[4769]: I0122 14:03:29.581071 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 14:03:29 crc kubenswrapper[4769]: I0122 14:03:29.581084 4769 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf-scripts\") on node \"crc\" DevicePath \"\""
Jan 22 14:03:29 crc kubenswrapper[4769]: I0122 14:03:29.581096 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rsjnm\" (UniqueName: \"kubernetes.io/projected/4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf-kube-api-access-rsjnm\") on node \"crc\" DevicePath \"\""
Jan 22 14:03:29 crc kubenswrapper[4769]: I0122 14:03:29.991485 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-hql94" event={"ID":"4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf","Type":"ContainerDied","Data":"1de90ac29d18bc8134c5a8f9409cf4f6984104454efcb5cd68aa76ba8988c519"}
Jan 22 14:03:29 crc kubenswrapper[4769]: I0122 14:03:29.991562 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1de90ac29d18bc8134c5a8f9409cf4f6984104454efcb5cd68aa76ba8988c519"
Jan 22 14:03:29 crc kubenswrapper[4769]: I0122 14:03:29.991628 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-hql94"
Jan 22 14:03:30 crc kubenswrapper[4769]: I0122 14:03:30.095285 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 22 14:03:30 crc kubenswrapper[4769]: E0122 14:03:30.095656 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf" containerName="nova-cell0-conductor-db-sync"
Jan 22 14:03:30 crc kubenswrapper[4769]: I0122 14:03:30.095671 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf" containerName="nova-cell0-conductor-db-sync"
Jan 22 14:03:30 crc kubenswrapper[4769]: I0122 14:03:30.095863 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf" containerName="nova-cell0-conductor-db-sync"
Jan 22 14:03:30 crc kubenswrapper[4769]: I0122 14:03:30.096447 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Jan 22 14:03:30 crc kubenswrapper[4769]: I0122 14:03:30.098283 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-hh9r6"
Jan 22 14:03:30 crc kubenswrapper[4769]: I0122 14:03:30.100185 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Jan 22 14:03:30 crc kubenswrapper[4769]: I0122 14:03:30.114974 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 22 14:03:30 crc kubenswrapper[4769]: I0122 14:03:30.192001 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qw5zz\" (UniqueName: \"kubernetes.io/projected/66c7ff68-1167-4dbe-8e53-40f378941703-kube-api-access-qw5zz\") pod \"nova-cell0-conductor-0\" (UID: \"66c7ff68-1167-4dbe-8e53-40f378941703\") " pod="openstack/nova-cell0-conductor-0"
Jan 22 14:03:30 crc kubenswrapper[4769]: I0122 14:03:30.192328 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66c7ff68-1167-4dbe-8e53-40f378941703-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"66c7ff68-1167-4dbe-8e53-40f378941703\") " pod="openstack/nova-cell0-conductor-0"
Jan 22 14:03:30 crc kubenswrapper[4769]: I0122 14:03:30.192515 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66c7ff68-1167-4dbe-8e53-40f378941703-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"66c7ff68-1167-4dbe-8e53-40f378941703\") " pod="openstack/nova-cell0-conductor-0"
Jan 22 14:03:30 crc kubenswrapper[4769]: I0122 14:03:30.294718 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qw5zz\" (UniqueName: \"kubernetes.io/projected/66c7ff68-1167-4dbe-8e53-40f378941703-kube-api-access-qw5zz\") pod \"nova-cell0-conductor-0\" (UID: \"66c7ff68-1167-4dbe-8e53-40f378941703\") " pod="openstack/nova-cell0-conductor-0"
Jan 22 14:03:30 crc kubenswrapper[4769]: I0122 14:03:30.295325 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66c7ff68-1167-4dbe-8e53-40f378941703-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"66c7ff68-1167-4dbe-8e53-40f378941703\") " pod="openstack/nova-cell0-conductor-0"
Jan 22 14:03:30 crc kubenswrapper[4769]: I0122 14:03:30.296013 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66c7ff68-1167-4dbe-8e53-40f378941703-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"66c7ff68-1167-4dbe-8e53-40f378941703\") " pod="openstack/nova-cell0-conductor-0"
Jan 22 14:03:30 crc kubenswrapper[4769]: I0122 14:03:30.298679 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66c7ff68-1167-4dbe-8e53-40f378941703-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"66c7ff68-1167-4dbe-8e53-40f378941703\") " pod="openstack/nova-cell0-conductor-0"
Jan 22 14:03:30 crc kubenswrapper[4769]: I0122 14:03:30.298690 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66c7ff68-1167-4dbe-8e53-40f378941703-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"66c7ff68-1167-4dbe-8e53-40f378941703\") " pod="openstack/nova-cell0-conductor-0"
Jan 22 14:03:30 crc kubenswrapper[4769]: I0122 14:03:30.323204 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qw5zz\" (UniqueName: \"kubernetes.io/projected/66c7ff68-1167-4dbe-8e53-40f378941703-kube-api-access-qw5zz\") pod \"nova-cell0-conductor-0\" (UID: \"66c7ff68-1167-4dbe-8e53-40f378941703\") " pod="openstack/nova-cell0-conductor-0"
Jan 22 14:03:30 crc kubenswrapper[4769]: I0122 14:03:30.415022 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Jan 22 14:03:30 crc kubenswrapper[4769]: I0122 14:03:30.840969 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 22 14:03:30 crc kubenswrapper[4769]: I0122 14:03:30.999364 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"66c7ff68-1167-4dbe-8e53-40f378941703","Type":"ContainerStarted","Data":"a1838467705c040eed132bd26af467e185ad5b62ad067843c8fdb68816dba547"}
Jan 22 14:03:32 crc kubenswrapper[4769]: I0122 14:03:32.009264 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"66c7ff68-1167-4dbe-8e53-40f378941703","Type":"ContainerStarted","Data":"2b6a5c6e1d7554b7db842372acbbecfc1c2c021f82e87bb8ae526d0c7a33a714"}
Jan 22 14:03:32 crc kubenswrapper[4769]: I0122 14:03:32.009652 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0"
Jan 22 14:03:32 crc kubenswrapper[4769]: I0122 14:03:32.039308 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.039284859 podStartE2EDuration="2.039284859s" podCreationTimestamp="2026-01-22 14:03:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:03:32.030521972 +0000 UTC m=+1191.441631911" watchObservedRunningTime="2026-01-22 14:03:32.039284859 +0000 UTC m=+1191.450394808"
Jan 22 14:03:40 crc kubenswrapper[4769]: I0122 14:03:40.443634 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0"
Jan 22 14:03:40 crc kubenswrapper[4769]: I0122 14:03:40.919329 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-6vgx7"]
Jan 22 14:03:40 crc kubenswrapper[4769]: I0122 14:03:40.924584 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-6vgx7"
Jan 22 14:03:40 crc kubenswrapper[4769]: I0122 14:03:40.932898 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts"
Jan 22 14:03:40 crc kubenswrapper[4769]: I0122 14:03:40.933065 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data"
Jan 22 14:03:40 crc kubenswrapper[4769]: I0122 14:03:40.934618 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-6vgx7"]
Jan 22 14:03:40 crc kubenswrapper[4769]: I0122 14:03:40.986755 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpn9x\" (UniqueName: \"kubernetes.io/projected/3137766d-8b45-47a0-a7ca-f1a3c381450d-kube-api-access-qpn9x\") pod \"nova-cell0-cell-mapping-6vgx7\" (UID: \"3137766d-8b45-47a0-a7ca-f1a3c381450d\") " pod="openstack/nova-cell0-cell-mapping-6vgx7"
Jan 22 14:03:40 crc kubenswrapper[4769]: I0122 14:03:40.986847 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3137766d-8b45-47a0-a7ca-f1a3c381450d-scripts\") pod \"nova-cell0-cell-mapping-6vgx7\" (UID: \"3137766d-8b45-47a0-a7ca-f1a3c381450d\") " pod="openstack/nova-cell0-cell-mapping-6vgx7"
Jan 22 14:03:40 crc kubenswrapper[4769]: I0122 14:03:40.986925 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3137766d-8b45-47a0-a7ca-f1a3c381450d-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-6vgx7\" (UID: \"3137766d-8b45-47a0-a7ca-f1a3c381450d\") " pod="openstack/nova-cell0-cell-mapping-6vgx7"
Jan 22 14:03:40 crc kubenswrapper[4769]: I0122 14:03:40.986991 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3137766d-8b45-47a0-a7ca-f1a3c381450d-config-data\") pod \"nova-cell0-cell-mapping-6vgx7\" (UID: \"3137766d-8b45-47a0-a7ca-f1a3c381450d\") " pod="openstack/nova-cell0-cell-mapping-6vgx7"
Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.088432 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpn9x\" (UniqueName: \"kubernetes.io/projected/3137766d-8b45-47a0-a7ca-f1a3c381450d-kube-api-access-qpn9x\") pod \"nova-cell0-cell-mapping-6vgx7\" (UID: \"3137766d-8b45-47a0-a7ca-f1a3c381450d\") " pod="openstack/nova-cell0-cell-mapping-6vgx7"
Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.088504 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3137766d-8b45-47a0-a7ca-f1a3c381450d-scripts\") pod \"nova-cell0-cell-mapping-6vgx7\" (UID: \"3137766d-8b45-47a0-a7ca-f1a3c381450d\") " pod="openstack/nova-cell0-cell-mapping-6vgx7"
Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.088581 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3137766d-8b45-47a0-a7ca-f1a3c381450d-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-6vgx7\" (UID: \"3137766d-8b45-47a0-a7ca-f1a3c381450d\") " pod="openstack/nova-cell0-cell-mapping-6vgx7"
Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.088637 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3137766d-8b45-47a0-a7ca-f1a3c381450d-config-data\") pod \"nova-cell0-cell-mapping-6vgx7\" (UID: \"3137766d-8b45-47a0-a7ca-f1a3c381450d\") " pod="openstack/nova-cell0-cell-mapping-6vgx7"
Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.098301 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3137766d-8b45-47a0-a7ca-f1a3c381450d-config-data\") pod \"nova-cell0-cell-mapping-6vgx7\" (UID: \"3137766d-8b45-47a0-a7ca-f1a3c381450d\") " pod="openstack/nova-cell0-cell-mapping-6vgx7"
Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.098441 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3137766d-8b45-47a0-a7ca-f1a3c381450d-scripts\") pod \"nova-cell0-cell-mapping-6vgx7\" (UID: \"3137766d-8b45-47a0-a7ca-f1a3c381450d\") " pod="openstack/nova-cell0-cell-mapping-6vgx7"
Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.122458 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3137766d-8b45-47a0-a7ca-f1a3c381450d-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-6vgx7\" (UID: \"3137766d-8b45-47a0-a7ca-f1a3c381450d\") " pod="openstack/nova-cell0-cell-mapping-6vgx7"
Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.127458 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpn9x\" (UniqueName: \"kubernetes.io/projected/3137766d-8b45-47a0-a7ca-f1a3c381450d-kube-api-access-qpn9x\") pod \"nova-cell0-cell-mapping-6vgx7\" (UID: \"3137766d-8b45-47a0-a7ca-f1a3c381450d\") " pod="openstack/nova-cell0-cell-mapping-6vgx7"
Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.164897 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.167306 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.177933 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.178258 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.180089 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.195723 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.201352 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.245705 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.257678 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-6vgx7"
Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.297073 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4xgl\" (UniqueName: \"kubernetes.io/projected/0a87cdd0-cc09-4004-90bf-bbe9bd9b453d-kube-api-access-b4xgl\") pod \"nova-api-0\" (UID: \"0a87cdd0-cc09-4004-90bf-bbe9bd9b453d\") " pod="openstack/nova-api-0"
Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.297137 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bba74422-5547-4700-919b-fd9707feaf8d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bba74422-5547-4700-919b-fd9707feaf8d\") " pod="openstack/nova-metadata-0"
Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.297180 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bba74422-5547-4700-919b-fd9707feaf8d-config-data\") pod \"nova-metadata-0\" (UID: \"bba74422-5547-4700-919b-fd9707feaf8d\") " pod="openstack/nova-metadata-0"
Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.297214 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a87cdd0-cc09-4004-90bf-bbe9bd9b453d-logs\") pod \"nova-api-0\" (UID: \"0a87cdd0-cc09-4004-90bf-bbe9bd9b453d\") " pod="openstack/nova-api-0"
Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.297247 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a87cdd0-cc09-4004-90bf-bbe9bd9b453d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0a87cdd0-cc09-4004-90bf-bbe9bd9b453d\") " pod="openstack/nova-api-0"
Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.297303 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46dlh\" (UniqueName: \"kubernetes.io/projected/bba74422-5547-4700-919b-fd9707feaf8d-kube-api-access-46dlh\") pod \"nova-metadata-0\" (UID: \"bba74422-5547-4700-919b-fd9707feaf8d\") " pod="openstack/nova-metadata-0"
Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.297330 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bba74422-5547-4700-919b-fd9707feaf8d-logs\") pod \"nova-metadata-0\" (UID: \"bba74422-5547-4700-919b-fd9707feaf8d\") " pod="openstack/nova-metadata-0"
Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.297770 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a87cdd0-cc09-4004-90bf-bbe9bd9b453d-config-data\") pod \"nova-api-0\" (UID: \"0a87cdd0-cc09-4004-90bf-bbe9bd9b453d\") " pod="openstack/nova-api-0"
Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.346872 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-hb2xg"]
Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.348675 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-hb2xg"
Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.354136 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.356557 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.368003 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.379883 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.381316 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.400315 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.401281 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c-dns-svc\") pod \"dnsmasq-dns-845d6d6f59-hb2xg\" (UID: \"52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c\") " pod="openstack/dnsmasq-dns-845d6d6f59-hb2xg"
Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.401362 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46dlh\" (UniqueName: \"kubernetes.io/projected/bba74422-5547-4700-919b-fd9707feaf8d-kube-api-access-46dlh\") pod \"nova-metadata-0\" (UID: \"bba74422-5547-4700-919b-fd9707feaf8d\") " pod="openstack/nova-metadata-0"
Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.401391 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bba74422-5547-4700-919b-fd9707feaf8d-logs\") pod \"nova-metadata-0\" (UID: \"bba74422-5547-4700-919b-fd9707feaf8d\") " pod="openstack/nova-metadata-0"
Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.401449 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c-dns-swift-storage-0\") pod \"dnsmasq-dns-845d6d6f59-hb2xg\" (UID: \"52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c\") " pod="openstack/dnsmasq-dns-845d6d6f59-hb2xg"
Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.401482 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a87cdd0-cc09-4004-90bf-bbe9bd9b453d-config-data\") pod \"nova-api-0\" (UID: \"0a87cdd0-cc09-4004-90bf-bbe9bd9b453d\") " pod="openstack/nova-api-0"
Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.401504 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1f2c596-25ff-4c08-9b23-b90aca949e45-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f1f2c596-25ff-4c08-9b23-b90aca949e45\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.401552 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c-ovsdbserver-nb\") pod \"dnsmasq-dns-845d6d6f59-hb2xg\" (UID: \"52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c\") " pod="openstack/dnsmasq-dns-845d6d6f59-hb2xg"
Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.401581 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c-ovsdbserver-sb\") pod \"dnsmasq-dns-845d6d6f59-hb2xg\" (UID: \"52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c\") " pod="openstack/dnsmasq-dns-845d6d6f59-hb2xg"
Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.401613 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c-config\") pod \"dnsmasq-dns-845d6d6f59-hb2xg\" (UID: \"52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c\") " pod="openstack/dnsmasq-dns-845d6d6f59-hb2xg"
Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.401642 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4xgl\" (UniqueName: \"kubernetes.io/projected/0a87cdd0-cc09-4004-90bf-bbe9bd9b453d-kube-api-access-b4xgl\") pod \"nova-api-0\" (UID: \"0a87cdd0-cc09-4004-90bf-bbe9bd9b453d\") " pod="openstack/nova-api-0"
Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.401668 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bba74422-5547-4700-919b-fd9707feaf8d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bba74422-5547-4700-919b-fd9707feaf8d\") " pod="openstack/nova-metadata-0"
Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.401735 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8msgb\" (UniqueName: \"kubernetes.io/projected/52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c-kube-api-access-8msgb\") pod \"dnsmasq-dns-845d6d6f59-hb2xg\" (UID: \"52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c\") " pod="openstack/dnsmasq-dns-845d6d6f59-hb2xg"
Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.401768 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bba74422-5547-4700-919b-fd9707feaf8d-config-data\") pod \"nova-metadata-0\" (UID: \"bba74422-5547-4700-919b-fd9707feaf8d\") " pod="openstack/nova-metadata-0"
Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.401825 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a87cdd0-cc09-4004-90bf-bbe9bd9b453d-logs\") pod \"nova-api-0\" (UID: \"0a87cdd0-cc09-4004-90bf-bbe9bd9b453d\") " pod="openstack/nova-api-0"
Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.401848 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1f2c596-25ff-4c08-9b23-b90aca949e45-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f1f2c596-25ff-4c08-9b23-b90aca949e45\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.401878 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbnbt\" (UniqueName: \"kubernetes.io/projected/f1f2c596-25ff-4c08-9b23-b90aca949e45-kube-api-access-lbnbt\") pod \"nova-cell1-novncproxy-0\" (UID: \"f1f2c596-25ff-4c08-9b23-b90aca949e45\") " pod="openstack/nova-cell1-novncproxy-0"
\"nova-cell1-novncproxy-0\" (UID: \"f1f2c596-25ff-4c08-9b23-b90aca949e45\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.401911 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a87cdd0-cc09-4004-90bf-bbe9bd9b453d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0a87cdd0-cc09-4004-90bf-bbe9bd9b453d\") " pod="openstack/nova-api-0" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.403200 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-hb2xg"] Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.404021 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bba74422-5547-4700-919b-fd9707feaf8d-logs\") pod \"nova-metadata-0\" (UID: \"bba74422-5547-4700-919b-fd9707feaf8d\") " pod="openstack/nova-metadata-0" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.412350 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a87cdd0-cc09-4004-90bf-bbe9bd9b453d-config-data\") pod \"nova-api-0\" (UID: \"0a87cdd0-cc09-4004-90bf-bbe9bd9b453d\") " pod="openstack/nova-api-0" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.412653 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a87cdd0-cc09-4004-90bf-bbe9bd9b453d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0a87cdd0-cc09-4004-90bf-bbe9bd9b453d\") " pod="openstack/nova-api-0" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.417038 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a87cdd0-cc09-4004-90bf-bbe9bd9b453d-logs\") pod \"nova-api-0\" (UID: \"0a87cdd0-cc09-4004-90bf-bbe9bd9b453d\") " pod="openstack/nova-api-0" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.424477 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bba74422-5547-4700-919b-fd9707feaf8d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bba74422-5547-4700-919b-fd9707feaf8d\") " pod="openstack/nova-metadata-0" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.425045 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bba74422-5547-4700-919b-fd9707feaf8d-config-data\") pod \"nova-metadata-0\" (UID: \"bba74422-5547-4700-919b-fd9707feaf8d\") " pod="openstack/nova-metadata-0" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.435494 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4xgl\" (UniqueName: \"kubernetes.io/projected/0a87cdd0-cc09-4004-90bf-bbe9bd9b453d-kube-api-access-b4xgl\") pod \"nova-api-0\" (UID: \"0a87cdd0-cc09-4004-90bf-bbe9bd9b453d\") " pod="openstack/nova-api-0" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.436441 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46dlh\" (UniqueName: \"kubernetes.io/projected/bba74422-5547-4700-919b-fd9707feaf8d-kube-api-access-46dlh\") pod \"nova-metadata-0\" (UID: \"bba74422-5547-4700-919b-fd9707feaf8d\") " pod="openstack/nova-metadata-0" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.445110 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-scheduler-0"] Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.472908 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.506487 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9c060e2-5b33-4452-bc58-2ce6e9f865d4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c9c060e2-5b33-4452-bc58-2ce6e9f865d4\") " pod="openstack/nova-scheduler-0" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.506551 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8msgb\" (UniqueName: \"kubernetes.io/projected/52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c-kube-api-access-8msgb\") pod \"dnsmasq-dns-845d6d6f59-hb2xg\" (UID: \"52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c\") " pod="openstack/dnsmasq-dns-845d6d6f59-hb2xg" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.506596 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nlr5\" (UniqueName: \"kubernetes.io/projected/c9c060e2-5b33-4452-bc58-2ce6e9f865d4-kube-api-access-9nlr5\") pod \"nova-scheduler-0\" (UID: \"c9c060e2-5b33-4452-bc58-2ce6e9f865d4\") " pod="openstack/nova-scheduler-0" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.506658 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1f2c596-25ff-4c08-9b23-b90aca949e45-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f1f2c596-25ff-4c08-9b23-b90aca949e45\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.506720 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbnbt\" (UniqueName: \"kubernetes.io/projected/f1f2c596-25ff-4c08-9b23-b90aca949e45-kube-api-access-lbnbt\") pod \"nova-cell1-novncproxy-0\" (UID: \"f1f2c596-25ff-4c08-9b23-b90aca949e45\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.506766 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c-dns-svc\") pod \"dnsmasq-dns-845d6d6f59-hb2xg\" (UID: \"52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c\") " pod="openstack/dnsmasq-dns-845d6d6f59-hb2xg" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.506898 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c-dns-swift-storage-0\") pod \"dnsmasq-dns-845d6d6f59-hb2xg\" (UID: \"52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c\") " pod="openstack/dnsmasq-dns-845d6d6f59-hb2xg" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.506953 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9c060e2-5b33-4452-bc58-2ce6e9f865d4-config-data\") pod \"nova-scheduler-0\" (UID: \"c9c060e2-5b33-4452-bc58-2ce6e9f865d4\") " pod="openstack/nova-scheduler-0" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.506990 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f1f2c596-25ff-4c08-9b23-b90aca949e45-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f1f2c596-25ff-4c08-9b23-b90aca949e45\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.507053 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c-ovsdbserver-nb\") pod \"dnsmasq-dns-845d6d6f59-hb2xg\" (UID: \"52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c\") " pod="openstack/dnsmasq-dns-845d6d6f59-hb2xg" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.507087 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c-ovsdbserver-sb\") pod \"dnsmasq-dns-845d6d6f59-hb2xg\" (UID: \"52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c\") " pod="openstack/dnsmasq-dns-845d6d6f59-hb2xg" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.507121 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c-config\") pod \"dnsmasq-dns-845d6d6f59-hb2xg\" (UID: \"52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c\") " pod="openstack/dnsmasq-dns-845d6d6f59-hb2xg" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.509175 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c-config\") pod \"dnsmasq-dns-845d6d6f59-hb2xg\" (UID: \"52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c\") " pod="openstack/dnsmasq-dns-845d6d6f59-hb2xg" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.509821 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c-ovsdbserver-sb\") pod \"dnsmasq-dns-845d6d6f59-hb2xg\" (UID: \"52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c\") " pod="openstack/dnsmasq-dns-845d6d6f59-hb2xg" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.510052 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c-dns-swift-storage-0\") pod \"dnsmasq-dns-845d6d6f59-hb2xg\" (UID: \"52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c\") " pod="openstack/dnsmasq-dns-845d6d6f59-hb2xg" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.511481 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c-ovsdbserver-nb\") pod \"dnsmasq-dns-845d6d6f59-hb2xg\" (UID: \"52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c\") " pod="openstack/dnsmasq-dns-845d6d6f59-hb2xg" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.513466 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c-dns-svc\") pod \"dnsmasq-dns-845d6d6f59-hb2xg\" (UID: \"52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c\") " pod="openstack/dnsmasq-dns-845d6d6f59-hb2xg" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.515565 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1f2c596-25ff-4c08-9b23-b90aca949e45-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"f1f2c596-25ff-4c08-9b23-b90aca949e45\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.529612 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.537107 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8msgb\" (UniqueName: \"kubernetes.io/projected/52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c-kube-api-access-8msgb\") pod \"dnsmasq-dns-845d6d6f59-hb2xg\" (UID: \"52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c\") " pod="openstack/dnsmasq-dns-845d6d6f59-hb2xg" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.539218 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1f2c596-25ff-4c08-9b23-b90aca949e45-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f1f2c596-25ff-4c08-9b23-b90aca949e45\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.547380 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbnbt\" (UniqueName: \"kubernetes.io/projected/f1f2c596-25ff-4c08-9b23-b90aca949e45-kube-api-access-lbnbt\") pod \"nova-cell1-novncproxy-0\" (UID: \"f1f2c596-25ff-4c08-9b23-b90aca949e45\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.579160 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.609200 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9c060e2-5b33-4452-bc58-2ce6e9f865d4-config-data\") pod \"nova-scheduler-0\" (UID: \"c9c060e2-5b33-4452-bc58-2ce6e9f865d4\") " pod="openstack/nova-scheduler-0" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.609370 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9c060e2-5b33-4452-bc58-2ce6e9f865d4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c9c060e2-5b33-4452-bc58-2ce6e9f865d4\") " pod="openstack/nova-scheduler-0" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.609403 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nlr5\" (UniqueName: \"kubernetes.io/projected/c9c060e2-5b33-4452-bc58-2ce6e9f865d4-kube-api-access-9nlr5\") pod \"nova-scheduler-0\" (UID: \"c9c060e2-5b33-4452-bc58-2ce6e9f865d4\") " pod="openstack/nova-scheduler-0" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.625457 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9c060e2-5b33-4452-bc58-2ce6e9f865d4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c9c060e2-5b33-4452-bc58-2ce6e9f865d4\") " pod="openstack/nova-scheduler-0" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.630965 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nlr5\" (UniqueName: \"kubernetes.io/projected/c9c060e2-5b33-4452-bc58-2ce6e9f865d4-kube-api-access-9nlr5\") pod \"nova-scheduler-0\" (UID: \"c9c060e2-5b33-4452-bc58-2ce6e9f865d4\") " pod="openstack/nova-scheduler-0" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.632298 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/c9c060e2-5b33-4452-bc58-2ce6e9f865d4-config-data\") pod \"nova-scheduler-0\" (UID: \"c9c060e2-5b33-4452-bc58-2ce6e9f865d4\") " pod="openstack/nova-scheduler-0" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.807680 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-hb2xg" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.848033 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.914130 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 22 14:03:42 crc kubenswrapper[4769]: I0122 14:03:42.018391 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-6vgx7"] Jan 22 14:03:42 crc kubenswrapper[4769]: W0122 14:03:42.052880 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3137766d_8b45_47a0_a7ca_f1a3c381450d.slice/crio-0e1ef3a355c24af9ddca6d17ce3327e51772b713889345a7a1b20a2fbc113938 WatchSource:0}: Error finding container 0e1ef3a355c24af9ddca6d17ce3327e51772b713889345a7a1b20a2fbc113938: Status 404 returned error can't find the container with id 0e1ef3a355c24af9ddca6d17ce3327e51772b713889345a7a1b20a2fbc113938 Jan 22 14:03:42 crc kubenswrapper[4769]: I0122 14:03:42.212256 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-6vgx7" event={"ID":"3137766d-8b45-47a0-a7ca-f1a3c381450d","Type":"ContainerStarted","Data":"0e1ef3a355c24af9ddca6d17ce3327e51772b713889345a7a1b20a2fbc113938"} Jan 22 14:03:42 crc kubenswrapper[4769]: I0122 14:03:42.232632 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 14:03:42 crc kubenswrapper[4769]: I0122 14:03:42.285223 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 22 14:03:42 crc kubenswrapper[4769]: I0122 14:03:42.383430 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-hb2xg"] Jan 22 14:03:42 crc kubenswrapper[4769]: I0122 14:03:42.396438 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-cg5m6"] Jan 22 14:03:42 crc kubenswrapper[4769]: I0122 14:03:42.397699 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-cg5m6" Jan 22 14:03:42 crc kubenswrapper[4769]: I0122 14:03:42.400353 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 22 14:03:42 crc kubenswrapper[4769]: I0122 14:03:42.400952 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 22 14:03:42 crc kubenswrapper[4769]: I0122 14:03:42.407160 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-cg5m6"] Jan 22 14:03:42 crc kubenswrapper[4769]: I0122 14:03:42.448204 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60fa7062-c4e9-4700-88e1-af5262989c6f-config-data\") pod \"nova-cell1-conductor-db-sync-cg5m6\" (UID: \"60fa7062-c4e9-4700-88e1-af5262989c6f\") " pod="openstack/nova-cell1-conductor-db-sync-cg5m6" Jan 22 14:03:42 crc kubenswrapper[4769]: I0122 14:03:42.448289 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60fa7062-c4e9-4700-88e1-af5262989c6f-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-cg5m6\" (UID: \"60fa7062-c4e9-4700-88e1-af5262989c6f\") " pod="openstack/nova-cell1-conductor-db-sync-cg5m6" Jan 22 14:03:42 crc kubenswrapper[4769]: I0122 14:03:42.448327 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pggb5\" (UniqueName: \"kubernetes.io/projected/60fa7062-c4e9-4700-88e1-af5262989c6f-kube-api-access-pggb5\") pod \"nova-cell1-conductor-db-sync-cg5m6\" (UID: \"60fa7062-c4e9-4700-88e1-af5262989c6f\") " pod="openstack/nova-cell1-conductor-db-sync-cg5m6" Jan 22 14:03:42 crc kubenswrapper[4769]: I0122 14:03:42.448370 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60fa7062-c4e9-4700-88e1-af5262989c6f-scripts\") pod \"nova-cell1-conductor-db-sync-cg5m6\" (UID: \"60fa7062-c4e9-4700-88e1-af5262989c6f\") " pod="openstack/nova-cell1-conductor-db-sync-cg5m6" Jan 22 14:03:42 crc kubenswrapper[4769]: I0122 14:03:42.550208 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60fa7062-c4e9-4700-88e1-af5262989c6f-config-data\") pod \"nova-cell1-conductor-db-sync-cg5m6\" (UID: \"60fa7062-c4e9-4700-88e1-af5262989c6f\") " pod="openstack/nova-cell1-conductor-db-sync-cg5m6" Jan 22 14:03:42 crc kubenswrapper[4769]: I0122 14:03:42.550297 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60fa7062-c4e9-4700-88e1-af5262989c6f-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-cg5m6\" (UID: \"60fa7062-c4e9-4700-88e1-af5262989c6f\") " pod="openstack/nova-cell1-conductor-db-sync-cg5m6" Jan 22 14:03:42 crc kubenswrapper[4769]: I0122 14:03:42.550342 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pggb5\" (UniqueName: \"kubernetes.io/projected/60fa7062-c4e9-4700-88e1-af5262989c6f-kube-api-access-pggb5\") pod \"nova-cell1-conductor-db-sync-cg5m6\" (UID: \"60fa7062-c4e9-4700-88e1-af5262989c6f\") " pod="openstack/nova-cell1-conductor-db-sync-cg5m6" Jan 22 14:03:42 crc kubenswrapper[4769]: I0122 14:03:42.550373 4769 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60fa7062-c4e9-4700-88e1-af5262989c6f-scripts\") pod \"nova-cell1-conductor-db-sync-cg5m6\" (UID: \"60fa7062-c4e9-4700-88e1-af5262989c6f\") " pod="openstack/nova-cell1-conductor-db-sync-cg5m6" Jan 22 14:03:42 crc kubenswrapper[4769]: I0122 14:03:42.554773 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60fa7062-c4e9-4700-88e1-af5262989c6f-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-cg5m6\" (UID: \"60fa7062-c4e9-4700-88e1-af5262989c6f\") " pod="openstack/nova-cell1-conductor-db-sync-cg5m6" Jan 22 14:03:42 crc kubenswrapper[4769]: I0122 14:03:42.555261 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60fa7062-c4e9-4700-88e1-af5262989c6f-config-data\") pod \"nova-cell1-conductor-db-sync-cg5m6\" (UID: \"60fa7062-c4e9-4700-88e1-af5262989c6f\") " pod="openstack/nova-cell1-conductor-db-sync-cg5m6" Jan 22 14:03:42 crc kubenswrapper[4769]: I0122 14:03:42.559481 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60fa7062-c4e9-4700-88e1-af5262989c6f-scripts\") pod \"nova-cell1-conductor-db-sync-cg5m6\" (UID: \"60fa7062-c4e9-4700-88e1-af5262989c6f\") " pod="openstack/nova-cell1-conductor-db-sync-cg5m6" Jan 22 14:03:42 crc kubenswrapper[4769]: I0122 14:03:42.570457 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pggb5\" (UniqueName: \"kubernetes.io/projected/60fa7062-c4e9-4700-88e1-af5262989c6f-kube-api-access-pggb5\") pod \"nova-cell1-conductor-db-sync-cg5m6\" (UID: \"60fa7062-c4e9-4700-88e1-af5262989c6f\") " pod="openstack/nova-cell1-conductor-db-sync-cg5m6" Jan 22 14:03:42 crc kubenswrapper[4769]: I0122 14:03:42.654704 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 22 14:03:42 crc kubenswrapper[4769]: W0122 14:03:42.699534 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf1f2c596_25ff_4c08_9b23_b90aca949e45.slice/crio-8522cdc8b7e7fadf9198c4e41afe42ad7a56383c9af88b3279cb3345f6237754 WatchSource:0}: Error finding container 8522cdc8b7e7fadf9198c4e41afe42ad7a56383c9af88b3279cb3345f6237754: Status 404 returned error can't find the container with id 8522cdc8b7e7fadf9198c4e41afe42ad7a56383c9af88b3279cb3345f6237754 Jan 22 14:03:42 crc kubenswrapper[4769]: I0122 14:03:42.716232 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-cg5m6" Jan 22 14:03:42 crc kubenswrapper[4769]: I0122 14:03:42.742203 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 14:03:42 crc kubenswrapper[4769]: W0122 14:03:42.753854 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9c060e2_5b33_4452_bc58_2ce6e9f865d4.slice/crio-d4ad591c838bef0b0a89079c05faf04520570b378a76c8d398873ab928b3ec0a WatchSource:0}: Error finding container d4ad591c838bef0b0a89079c05faf04520570b378a76c8d398873ab928b3ec0a: Status 404 returned error can't find the container with id d4ad591c838bef0b0a89079c05faf04520570b378a76c8d398873ab928b3ec0a Jan 22 14:03:43 crc kubenswrapper[4769]: I0122 14:03:43.217238 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-cg5m6"] Jan 22 14:03:43 crc kubenswrapper[4769]: I0122 14:03:43.223387 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c9c060e2-5b33-4452-bc58-2ce6e9f865d4","Type":"ContainerStarted","Data":"d4ad591c838bef0b0a89079c05faf04520570b378a76c8d398873ab928b3ec0a"} Jan 22 14:03:43 crc kubenswrapper[4769]: I0122 14:03:43.224821 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f1f2c596-25ff-4c08-9b23-b90aca949e45","Type":"ContainerStarted","Data":"8522cdc8b7e7fadf9198c4e41afe42ad7a56383c9af88b3279cb3345f6237754"} Jan 22 14:03:43 crc kubenswrapper[4769]: I0122 14:03:43.233905 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0a87cdd0-cc09-4004-90bf-bbe9bd9b453d","Type":"ContainerStarted","Data":"7522f136416e24ddb1e2da868b4df82fccac17698bad3fc0cffb8764c95aa35e"} Jan 22 14:03:43 crc kubenswrapper[4769]: I0122 14:03:43.237230 4769 generic.go:334] "Generic (PLEG): container finished" podID="52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c" containerID="5ae6d8389b8fd75024e021ee39c4d142ba4295adb4f7e76df5657555a85574c4" exitCode=0 Jan 22 14:03:43 crc kubenswrapper[4769]: I0122 14:03:43.237280 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-hb2xg" event={"ID":"52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c","Type":"ContainerDied","Data":"5ae6d8389b8fd75024e021ee39c4d142ba4295adb4f7e76df5657555a85574c4"} Jan 22 14:03:43 crc kubenswrapper[4769]: I0122 14:03:43.237340 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-hb2xg" event={"ID":"52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c","Type":"ContainerStarted","Data":"07ff2a18726b3f734621e81451a91539db3bacf8cce99d939c1f38660bd71e0c"} Jan 22 14:03:43 crc kubenswrapper[4769]: I0122 14:03:43.246323 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bba74422-5547-4700-919b-fd9707feaf8d","Type":"ContainerStarted","Data":"3f6efd7484c8f82f7294e9fc3f2dedfa64a83c4e487c60f5f3d00b72dea2aeff"} Jan 22 14:03:43 crc kubenswrapper[4769]: I0122 14:03:43.254973 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-6vgx7" event={"ID":"3137766d-8b45-47a0-a7ca-f1a3c381450d","Type":"ContainerStarted","Data":"7c716f4cbcf6f24dd054838f2140dd17dfc86e227f15ff8751421f1115943a30"} Jan 22 14:03:43 crc kubenswrapper[4769]: I0122 14:03:43.295132 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-6vgx7" 
podStartSLOduration=3.295113053 podStartE2EDuration="3.295113053s" podCreationTimestamp="2026-01-22 14:03:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:03:43.287068836 +0000 UTC m=+1202.698178765" watchObservedRunningTime="2026-01-22 14:03:43.295113053 +0000 UTC m=+1202.706222982" Jan 22 14:03:44 crc kubenswrapper[4769]: I0122 14:03:44.272308 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-hb2xg" event={"ID":"52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c","Type":"ContainerStarted","Data":"097268cd9b4b048c77b3bed18c15fcbd5ff809f46cfef2a702c3dc0cab1091bb"} Jan 22 14:03:44 crc kubenswrapper[4769]: I0122 14:03:44.274340 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-845d6d6f59-hb2xg" Jan 22 14:03:44 crc kubenswrapper[4769]: I0122 14:03:44.280478 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-cg5m6" event={"ID":"60fa7062-c4e9-4700-88e1-af5262989c6f","Type":"ContainerStarted","Data":"b968152c0d0005bd0bae6dd12531f4e3ac4944479a46e411981d500bf6e21a03"} Jan 22 14:03:44 crc kubenswrapper[4769]: I0122 14:03:44.280538 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-cg5m6" event={"ID":"60fa7062-c4e9-4700-88e1-af5262989c6f","Type":"ContainerStarted","Data":"4281687c125bb60dc1e9c561adac44c125c994b9787a7a132375bd1d9a17e1e3"} Jan 22 14:03:44 crc kubenswrapper[4769]: I0122 14:03:44.348258 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-cg5m6" podStartSLOduration=2.348236136 podStartE2EDuration="2.348236136s" podCreationTimestamp="2026-01-22 14:03:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:03:44.32623331 +0000 UTC m=+1203.737343249" watchObservedRunningTime="2026-01-22 14:03:44.348236136 +0000 UTC m=+1203.759346065" Jan 22 14:03:44 crc kubenswrapper[4769]: I0122 14:03:44.352267 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-845d6d6f59-hb2xg" podStartSLOduration=3.3522513050000002 podStartE2EDuration="3.352251305s" podCreationTimestamp="2026-01-22 14:03:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:03:44.302206387 +0000 UTC m=+1203.713316326" watchObservedRunningTime="2026-01-22 14:03:44.352251305 +0000 UTC m=+1203.763361234" Jan 22 14:03:44 crc kubenswrapper[4769]: I0122 14:03:44.648689 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 14:03:44 crc kubenswrapper[4769]: I0122 14:03:44.659889 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 22 14:03:47 crc kubenswrapper[4769]: I0122 14:03:47.330736 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c9c060e2-5b33-4452-bc58-2ce6e9f865d4","Type":"ContainerStarted","Data":"936c9f73bbce73a5f4e62ca042688b2e127679bb594e2bf6053e27831d6b26d1"} Jan 22 14:03:47 crc kubenswrapper[4769]: I0122 14:03:47.332264 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"f1f2c596-25ff-4c08-9b23-b90aca949e45","Type":"ContainerStarted","Data":"8f9e70a0f1c97e8735286a0ca726202c1244aa104f63b81296d54b23717fa516"} Jan 22 14:03:47 crc kubenswrapper[4769]: I0122 14:03:47.332379 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="f1f2c596-25ff-4c08-9b23-b90aca949e45" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://8f9e70a0f1c97e8735286a0ca726202c1244aa104f63b81296d54b23717fa516" gracePeriod=30 Jan 22 14:03:47 crc kubenswrapper[4769]: I0122 14:03:47.334758 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0a87cdd0-cc09-4004-90bf-bbe9bd9b453d","Type":"ContainerStarted","Data":"05fba83fc66875a0c66f3a2ceadc7ebd73ed593ba2c2b3f6ecd6111b7621b631"} Jan 22 14:03:47 crc kubenswrapper[4769]: I0122 14:03:47.334803 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0a87cdd0-cc09-4004-90bf-bbe9bd9b453d","Type":"ContainerStarted","Data":"6234a0446d758d662f481f2255e5c0d82c8486e9bd8315786bd7329443cc3434"} Jan 22 14:03:47 crc kubenswrapper[4769]: I0122 14:03:47.337766 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bba74422-5547-4700-919b-fd9707feaf8d","Type":"ContainerStarted","Data":"c849bc4cd2b0b4f1d280dbc38ffb8f221095344c72df5a62a2c4b5f5b13cb271"} Jan 22 14:03:47 crc kubenswrapper[4769]: I0122 14:03:47.337818 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bba74422-5547-4700-919b-fd9707feaf8d","Type":"ContainerStarted","Data":"9cabcd9a1b25026195fe87254dcb14f9a323c2b720deddf305f2f02bf4a074fc"} Jan 22 14:03:47 crc kubenswrapper[4769]: I0122 14:03:47.337893 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="bba74422-5547-4700-919b-fd9707feaf8d" containerName="nova-metadata-log" containerID="cri-o://9cabcd9a1b25026195fe87254dcb14f9a323c2b720deddf305f2f02bf4a074fc" gracePeriod=30 Jan 22 14:03:47 crc kubenswrapper[4769]: I0122 14:03:47.338015 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="bba74422-5547-4700-919b-fd9707feaf8d" containerName="nova-metadata-metadata" containerID="cri-o://c849bc4cd2b0b4f1d280dbc38ffb8f221095344c72df5a62a2c4b5f5b13cb271" gracePeriod=30 Jan 22 14:03:47 crc kubenswrapper[4769]: I0122 14:03:47.353667 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.902607257 podStartE2EDuration="6.353651896s" podCreationTimestamp="2026-01-22 14:03:41 +0000 UTC" firstStartedPulling="2026-01-22 14:03:42.759066609 +0000 UTC m=+1202.170176538" lastFinishedPulling="2026-01-22 14:03:46.210111248 +0000 UTC m=+1205.621221177" observedRunningTime="2026-01-22 14:03:47.350749278 +0000 UTC m=+1206.761859207" watchObservedRunningTime="2026-01-22 14:03:47.353651896 +0000 UTC m=+1206.764761825" Jan 22 14:03:47 crc kubenswrapper[4769]: I0122 14:03:47.380287 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.472941472 podStartE2EDuration="6.380263839s" podCreationTimestamp="2026-01-22 14:03:41 +0000 UTC" firstStartedPulling="2026-01-22 14:03:42.292205124 +0000 UTC m=+1201.703315053" lastFinishedPulling="2026-01-22 14:03:46.199527471 +0000 UTC m=+1205.610637420" observedRunningTime="2026-01-22 14:03:47.371078849 +0000 UTC 
m=+1206.782188788" watchObservedRunningTime="2026-01-22 14:03:47.380263839 +0000 UTC m=+1206.791373768" Jan 22 14:03:47 crc kubenswrapper[4769]: I0122 14:03:47.387560 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.467106224 podStartE2EDuration="6.387541206s" podCreationTimestamp="2026-01-22 14:03:41 +0000 UTC" firstStartedPulling="2026-01-22 14:03:42.28689889 +0000 UTC m=+1201.698008819" lastFinishedPulling="2026-01-22 14:03:46.207333872 +0000 UTC m=+1205.618443801" observedRunningTime="2026-01-22 14:03:47.385777719 +0000 UTC m=+1206.796887658" watchObservedRunningTime="2026-01-22 14:03:47.387541206 +0000 UTC m=+1206.798651135" Jan 22 14:03:47 crc kubenswrapper[4769]: I0122 14:03:47.413673 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.9156870010000002 podStartE2EDuration="6.413654095s" podCreationTimestamp="2026-01-22 14:03:41 +0000 UTC" firstStartedPulling="2026-01-22 14:03:42.701613649 +0000 UTC m=+1202.112723578" lastFinishedPulling="2026-01-22 14:03:46.199580743 +0000 UTC m=+1205.610690672" observedRunningTime="2026-01-22 14:03:47.407685114 +0000 UTC m=+1206.818795043" watchObservedRunningTime="2026-01-22 14:03:47.413654095 +0000 UTC m=+1206.824764024" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.301588 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.339110 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.348835 4769 generic.go:334] "Generic (PLEG): container finished" podID="bba74422-5547-4700-919b-fd9707feaf8d" containerID="c849bc4cd2b0b4f1d280dbc38ffb8f221095344c72df5a62a2c4b5f5b13cb271" exitCode=0 Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.348863 4769 generic.go:334] "Generic (PLEG): container finished" podID="bba74422-5547-4700-919b-fd9707feaf8d" containerID="9cabcd9a1b25026195fe87254dcb14f9a323c2b720deddf305f2f02bf4a074fc" exitCode=143 Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.348907 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bba74422-5547-4700-919b-fd9707feaf8d","Type":"ContainerDied","Data":"c849bc4cd2b0b4f1d280dbc38ffb8f221095344c72df5a62a2c4b5f5b13cb271"} Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.348923 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.348952 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bba74422-5547-4700-919b-fd9707feaf8d","Type":"ContainerDied","Data":"9cabcd9a1b25026195fe87254dcb14f9a323c2b720deddf305f2f02bf4a074fc"} Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.348963 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bba74422-5547-4700-919b-fd9707feaf8d","Type":"ContainerDied","Data":"3f6efd7484c8f82f7294e9fc3f2dedfa64a83c4e487c60f5f3d00b72dea2aeff"} Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.348980 4769 scope.go:117] "RemoveContainer" containerID="c849bc4cd2b0b4f1d280dbc38ffb8f221095344c72df5a62a2c4b5f5b13cb271" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.370129 4769 scope.go:117] "RemoveContainer" containerID="9cabcd9a1b25026195fe87254dcb14f9a323c2b720deddf305f2f02bf4a074fc" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.390184 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bba74422-5547-4700-919b-fd9707feaf8d-combined-ca-bundle\") pod \"bba74422-5547-4700-919b-fd9707feaf8d\" (UID: \"bba74422-5547-4700-919b-fd9707feaf8d\") " Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.390282 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bba74422-5547-4700-919b-fd9707feaf8d-logs\") pod \"bba74422-5547-4700-919b-fd9707feaf8d\" (UID: \"bba74422-5547-4700-919b-fd9707feaf8d\") " Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.390309 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bba74422-5547-4700-919b-fd9707feaf8d-config-data\") pod \"bba74422-5547-4700-919b-fd9707feaf8d\" (UID: \"bba74422-5547-4700-919b-fd9707feaf8d\") " Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.390427 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46dlh\" (UniqueName: \"kubernetes.io/projected/bba74422-5547-4700-919b-fd9707feaf8d-kube-api-access-46dlh\") pod \"bba74422-5547-4700-919b-fd9707feaf8d\" (UID: \"bba74422-5547-4700-919b-fd9707feaf8d\") " Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.394128 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bba74422-5547-4700-919b-fd9707feaf8d-logs" (OuterVolumeSpecName: "logs") pod "bba74422-5547-4700-919b-fd9707feaf8d" (UID: "bba74422-5547-4700-919b-fd9707feaf8d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.396075 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bba74422-5547-4700-919b-fd9707feaf8d-kube-api-access-46dlh" (OuterVolumeSpecName: "kube-api-access-46dlh") pod "bba74422-5547-4700-919b-fd9707feaf8d" (UID: "bba74422-5547-4700-919b-fd9707feaf8d"). InnerVolumeSpecName "kube-api-access-46dlh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.426943 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bba74422-5547-4700-919b-fd9707feaf8d-config-data" (OuterVolumeSpecName: "config-data") pod "bba74422-5547-4700-919b-fd9707feaf8d" (UID: "bba74422-5547-4700-919b-fd9707feaf8d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.433545 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bba74422-5547-4700-919b-fd9707feaf8d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bba74422-5547-4700-919b-fd9707feaf8d" (UID: "bba74422-5547-4700-919b-fd9707feaf8d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.479211 4769 scope.go:117] "RemoveContainer" containerID="c849bc4cd2b0b4f1d280dbc38ffb8f221095344c72df5a62a2c4b5f5b13cb271" Jan 22 14:03:48 crc kubenswrapper[4769]: E0122 14:03:48.479727 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c849bc4cd2b0b4f1d280dbc38ffb8f221095344c72df5a62a2c4b5f5b13cb271\": container with ID starting with c849bc4cd2b0b4f1d280dbc38ffb8f221095344c72df5a62a2c4b5f5b13cb271 not found: ID does not exist" containerID="c849bc4cd2b0b4f1d280dbc38ffb8f221095344c72df5a62a2c4b5f5b13cb271" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.479763 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c849bc4cd2b0b4f1d280dbc38ffb8f221095344c72df5a62a2c4b5f5b13cb271"} err="failed to get container status \"c849bc4cd2b0b4f1d280dbc38ffb8f221095344c72df5a62a2c4b5f5b13cb271\": rpc error: code = NotFound desc = could not find container \"c849bc4cd2b0b4f1d280dbc38ffb8f221095344c72df5a62a2c4b5f5b13cb271\": container with ID starting with c849bc4cd2b0b4f1d280dbc38ffb8f221095344c72df5a62a2c4b5f5b13cb271 not found: ID does not exist" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.479808 4769 scope.go:117] "RemoveContainer" containerID="9cabcd9a1b25026195fe87254dcb14f9a323c2b720deddf305f2f02bf4a074fc" Jan 22 14:03:48 crc kubenswrapper[4769]: E0122 14:03:48.480174 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9cabcd9a1b25026195fe87254dcb14f9a323c2b720deddf305f2f02bf4a074fc\": container with ID starting with 9cabcd9a1b25026195fe87254dcb14f9a323c2b720deddf305f2f02bf4a074fc not found: ID does not exist" containerID="9cabcd9a1b25026195fe87254dcb14f9a323c2b720deddf305f2f02bf4a074fc" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.480197 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cabcd9a1b25026195fe87254dcb14f9a323c2b720deddf305f2f02bf4a074fc"} err="failed to get container status \"9cabcd9a1b25026195fe87254dcb14f9a323c2b720deddf305f2f02bf4a074fc\": rpc error: code = NotFound desc = could not find container \"9cabcd9a1b25026195fe87254dcb14f9a323c2b720deddf305f2f02bf4a074fc\": container with ID starting with 9cabcd9a1b25026195fe87254dcb14f9a323c2b720deddf305f2f02bf4a074fc not found: ID does not exist" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.480210 4769 scope.go:117] "RemoveContainer" 
containerID="c849bc4cd2b0b4f1d280dbc38ffb8f221095344c72df5a62a2c4b5f5b13cb271" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.480462 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c849bc4cd2b0b4f1d280dbc38ffb8f221095344c72df5a62a2c4b5f5b13cb271"} err="failed to get container status \"c849bc4cd2b0b4f1d280dbc38ffb8f221095344c72df5a62a2c4b5f5b13cb271\": rpc error: code = NotFound desc = could not find container \"c849bc4cd2b0b4f1d280dbc38ffb8f221095344c72df5a62a2c4b5f5b13cb271\": container with ID starting with c849bc4cd2b0b4f1d280dbc38ffb8f221095344c72df5a62a2c4b5f5b13cb271 not found: ID does not exist" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.480481 4769 scope.go:117] "RemoveContainer" containerID="9cabcd9a1b25026195fe87254dcb14f9a323c2b720deddf305f2f02bf4a074fc" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.480741 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cabcd9a1b25026195fe87254dcb14f9a323c2b720deddf305f2f02bf4a074fc"} err="failed to get container status \"9cabcd9a1b25026195fe87254dcb14f9a323c2b720deddf305f2f02bf4a074fc\": rpc error: code = NotFound desc = could not find container \"9cabcd9a1b25026195fe87254dcb14f9a323c2b720deddf305f2f02bf4a074fc\": container with ID starting with 9cabcd9a1b25026195fe87254dcb14f9a323c2b720deddf305f2f02bf4a074fc not found: ID does not exist" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.492374 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-46dlh\" (UniqueName: \"kubernetes.io/projected/bba74422-5547-4700-919b-fd9707feaf8d-kube-api-access-46dlh\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.492409 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bba74422-5547-4700-919b-fd9707feaf8d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.492457 4769 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bba74422-5547-4700-919b-fd9707feaf8d-logs\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.492468 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bba74422-5547-4700-919b-fd9707feaf8d-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.682196 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.692038 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.708981 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 22 14:03:48 crc kubenswrapper[4769]: E0122 14:03:48.709458 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bba74422-5547-4700-919b-fd9707feaf8d" containerName="nova-metadata-log" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.709479 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="bba74422-5547-4700-919b-fd9707feaf8d" containerName="nova-metadata-log" Jan 22 14:03:48 crc kubenswrapper[4769]: E0122 14:03:48.709499 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bba74422-5547-4700-919b-fd9707feaf8d" 
containerName="nova-metadata-metadata" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.709508 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="bba74422-5547-4700-919b-fd9707feaf8d" containerName="nova-metadata-metadata" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.709729 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="bba74422-5547-4700-919b-fd9707feaf8d" containerName="nova-metadata-log" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.709760 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="bba74422-5547-4700-919b-fd9707feaf8d" containerName="nova-metadata-metadata" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.710945 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.713188 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.715162 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.718616 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.798492 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a025db2-7758-45ec-a6dc-d5bbd07e339b-logs\") pod \"nova-metadata-0\" (UID: \"7a025db2-7758-45ec-a6dc-d5bbd07e339b\") " pod="openstack/nova-metadata-0" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.798603 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a025db2-7758-45ec-a6dc-d5bbd07e339b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7a025db2-7758-45ec-a6dc-d5bbd07e339b\") " pod="openstack/nova-metadata-0" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.798725 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a025db2-7758-45ec-a6dc-d5bbd07e339b-config-data\") pod \"nova-metadata-0\" (UID: \"7a025db2-7758-45ec-a6dc-d5bbd07e339b\") " pod="openstack/nova-metadata-0" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.798850 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gqfb\" (UniqueName: \"kubernetes.io/projected/7a025db2-7758-45ec-a6dc-d5bbd07e339b-kube-api-access-2gqfb\") pod \"nova-metadata-0\" (UID: \"7a025db2-7758-45ec-a6dc-d5bbd07e339b\") " pod="openstack/nova-metadata-0" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.798941 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a025db2-7758-45ec-a6dc-d5bbd07e339b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7a025db2-7758-45ec-a6dc-d5bbd07e339b\") " pod="openstack/nova-metadata-0" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.896044 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bba74422-5547-4700-919b-fd9707feaf8d" path="/var/lib/kubelet/pods/bba74422-5547-4700-919b-fd9707feaf8d/volumes" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.900728 4769 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a025db2-7758-45ec-a6dc-d5bbd07e339b-logs\") pod \"nova-metadata-0\" (UID: \"7a025db2-7758-45ec-a6dc-d5bbd07e339b\") " pod="openstack/nova-metadata-0" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.900781 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a025db2-7758-45ec-a6dc-d5bbd07e339b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7a025db2-7758-45ec-a6dc-d5bbd07e339b\") " pod="openstack/nova-metadata-0" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.900927 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a025db2-7758-45ec-a6dc-d5bbd07e339b-config-data\") pod \"nova-metadata-0\" (UID: \"7a025db2-7758-45ec-a6dc-d5bbd07e339b\") " pod="openstack/nova-metadata-0" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.900966 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gqfb\" (UniqueName: \"kubernetes.io/projected/7a025db2-7758-45ec-a6dc-d5bbd07e339b-kube-api-access-2gqfb\") pod \"nova-metadata-0\" (UID: \"7a025db2-7758-45ec-a6dc-d5bbd07e339b\") " pod="openstack/nova-metadata-0" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.901007 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a025db2-7758-45ec-a6dc-d5bbd07e339b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7a025db2-7758-45ec-a6dc-d5bbd07e339b\") " pod="openstack/nova-metadata-0" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.901178 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a025db2-7758-45ec-a6dc-d5bbd07e339b-logs\") pod \"nova-metadata-0\" (UID: \"7a025db2-7758-45ec-a6dc-d5bbd07e339b\") " pod="openstack/nova-metadata-0" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.905672 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a025db2-7758-45ec-a6dc-d5bbd07e339b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7a025db2-7758-45ec-a6dc-d5bbd07e339b\") " pod="openstack/nova-metadata-0" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.905913 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a025db2-7758-45ec-a6dc-d5bbd07e339b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7a025db2-7758-45ec-a6dc-d5bbd07e339b\") " pod="openstack/nova-metadata-0" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.906708 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a025db2-7758-45ec-a6dc-d5bbd07e339b-config-data\") pod \"nova-metadata-0\" (UID: \"7a025db2-7758-45ec-a6dc-d5bbd07e339b\") " pod="openstack/nova-metadata-0" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.921480 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gqfb\" (UniqueName: \"kubernetes.io/projected/7a025db2-7758-45ec-a6dc-d5bbd07e339b-kube-api-access-2gqfb\") pod \"nova-metadata-0\" (UID: \"7a025db2-7758-45ec-a6dc-d5bbd07e339b\") " pod="openstack/nova-metadata-0" Jan 22 14:03:49 crc 
kubenswrapper[4769]: I0122 14:03:49.038546 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 14:03:49 crc kubenswrapper[4769]: I0122 14:03:49.408029 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 14:03:49 crc kubenswrapper[4769]: W0122 14:03:49.416178 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a025db2_7758_45ec_a6dc_d5bbd07e339b.slice/crio-dcd8422210c770f204f5cc303d111ef48b5faf478309f593879a84852fa5cb77 WatchSource:0}: Error finding container dcd8422210c770f204f5cc303d111ef48b5faf478309f593879a84852fa5cb77: Status 404 returned error can't find the container with id dcd8422210c770f204f5cc303d111ef48b5faf478309f593879a84852fa5cb77 Jan 22 14:03:50 crc kubenswrapper[4769]: I0122 14:03:50.399634 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7a025db2-7758-45ec-a6dc-d5bbd07e339b","Type":"ContainerStarted","Data":"8c789004b4d456cad7d8a9e052bed8c52200c81dfed6033126bfe22fc57a38ef"} Jan 22 14:03:50 crc kubenswrapper[4769]: I0122 14:03:50.399975 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7a025db2-7758-45ec-a6dc-d5bbd07e339b","Type":"ContainerStarted","Data":"dcd8422210c770f204f5cc303d111ef48b5faf478309f593879a84852fa5cb77"} Jan 22 14:03:51 crc kubenswrapper[4769]: I0122 14:03:51.530902 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 22 14:03:51 crc kubenswrapper[4769]: I0122 14:03:51.531439 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 22 14:03:51 crc kubenswrapper[4769]: I0122 14:03:51.809967 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-845d6d6f59-hb2xg" Jan 22 14:03:51 crc kubenswrapper[4769]: I0122 14:03:51.850233 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 22 14:03:51 crc kubenswrapper[4769]: I0122 14:03:51.878219 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-gjxrr"] Jan 22 14:03:51 crc kubenswrapper[4769]: I0122 14:03:51.878509 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5784cf869f-gjxrr" podUID="e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4" containerName="dnsmasq-dns" containerID="cri-o://fe451f9d4d036e3a9401a1c3a26fc5a0b7d0eb48182d28ec094d84c5d2642db8" gracePeriod=10 Jan 22 14:03:51 crc kubenswrapper[4769]: I0122 14:03:51.915004 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 22 14:03:51 crc kubenswrapper[4769]: I0122 14:03:51.915048 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 22 14:03:51 crc kubenswrapper[4769]: I0122 14:03:51.971936 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 22 14:03:52 crc kubenswrapper[4769]: I0122 14:03:52.424068 4769 generic.go:334] "Generic (PLEG): container finished" podID="e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4" containerID="fe451f9d4d036e3a9401a1c3a26fc5a0b7d0eb48182d28ec094d84c5d2642db8" exitCode=0 Jan 22 14:03:52 crc kubenswrapper[4769]: I0122 14:03:52.424505 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-5784cf869f-gjxrr" event={"ID":"e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4","Type":"ContainerDied","Data":"fe451f9d4d036e3a9401a1c3a26fc5a0b7d0eb48182d28ec094d84c5d2642db8"} Jan 22 14:03:52 crc kubenswrapper[4769]: I0122 14:03:52.424537 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-gjxrr" event={"ID":"e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4","Type":"ContainerDied","Data":"d6c99dc7e96389aa270b082a25059df7fce55051d25083a5534ef853a5abe126"} Jan 22 14:03:52 crc kubenswrapper[4769]: I0122 14:03:52.424567 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6c99dc7e96389aa270b082a25059df7fce55051d25083a5534ef853a5abe126" Jan 22 14:03:52 crc kubenswrapper[4769]: I0122 14:03:52.427135 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7a025db2-7758-45ec-a6dc-d5bbd07e339b","Type":"ContainerStarted","Data":"e52c9103648a0a34c7603b656d8929a8feba8a4d8ec58efa070b2b3c3423b00e"} Jan 22 14:03:52 crc kubenswrapper[4769]: I0122 14:03:52.451922 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=4.451900258 podStartE2EDuration="4.451900258s" podCreationTimestamp="2026-01-22 14:03:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:03:52.448229258 +0000 UTC m=+1211.859339187" watchObservedRunningTime="2026-01-22 14:03:52.451900258 +0000 UTC m=+1211.863010197" Jan 22 14:03:52 crc kubenswrapper[4769]: I0122 14:03:52.478344 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 22 14:03:52 crc kubenswrapper[4769]: I0122 14:03:52.501706 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-gjxrr" Jan 22 14:03:52 crc kubenswrapper[4769]: I0122 14:03:52.583814 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4-ovsdbserver-sb\") pod \"e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4\" (UID: \"e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4\") " Jan 22 14:03:52 crc kubenswrapper[4769]: I0122 14:03:52.584021 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4-ovsdbserver-nb\") pod \"e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4\" (UID: \"e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4\") " Jan 22 14:03:52 crc kubenswrapper[4769]: I0122 14:03:52.584154 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4-dns-svc\") pod \"e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4\" (UID: \"e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4\") " Jan 22 14:03:52 crc kubenswrapper[4769]: I0122 14:03:52.584288 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4-dns-swift-storage-0\") pod \"e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4\" (UID: \"e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4\") " Jan 22 14:03:52 crc kubenswrapper[4769]: I0122 14:03:52.584389 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4-config\") pod \"e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4\" (UID: \"e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4\") " Jan 22 14:03:52 crc kubenswrapper[4769]: I0122 14:03:52.584527 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lw4nr\" (UniqueName: \"kubernetes.io/projected/e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4-kube-api-access-lw4nr\") pod \"e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4\" (UID: \"e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4\") " Jan 22 14:03:52 crc kubenswrapper[4769]: I0122 14:03:52.591511 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4-kube-api-access-lw4nr" (OuterVolumeSpecName: "kube-api-access-lw4nr") pod "e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4" (UID: "e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4"). InnerVolumeSpecName "kube-api-access-lw4nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:03:52 crc kubenswrapper[4769]: I0122 14:03:52.621136 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0a87cdd0-cc09-4004-90bf-bbe9bd9b453d" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.185:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 14:03:52 crc kubenswrapper[4769]: I0122 14:03:52.621161 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0a87cdd0-cc09-4004-90bf-bbe9bd9b453d" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.185:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 14:03:52 crc kubenswrapper[4769]: I0122 14:03:52.652194 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4-config" (OuterVolumeSpecName: "config") pod "e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4" (UID: "e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:03:52 crc kubenswrapper[4769]: I0122 14:03:52.661362 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4" (UID: "e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:03:52 crc kubenswrapper[4769]: I0122 14:03:52.667670 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4" (UID: "e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:03:52 crc kubenswrapper[4769]: I0122 14:03:52.675327 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4" (UID: "e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:03:52 crc kubenswrapper[4769]: I0122 14:03:52.686655 4769 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:52 crc kubenswrapper[4769]: I0122 14:03:52.686692 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:52 crc kubenswrapper[4769]: I0122 14:03:52.686705 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lw4nr\" (UniqueName: \"kubernetes.io/projected/e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4-kube-api-access-lw4nr\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:52 crc kubenswrapper[4769]: I0122 14:03:52.686715 4769 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:52 crc kubenswrapper[4769]: I0122 14:03:52.686725 4769 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:52 crc kubenswrapper[4769]: I0122 14:03:52.706613 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4" (UID: "e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:03:52 crc kubenswrapper[4769]: I0122 14:03:52.788769 4769 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:53 crc kubenswrapper[4769]: I0122 14:03:53.436576 4769 generic.go:334] "Generic (PLEG): container finished" podID="3137766d-8b45-47a0-a7ca-f1a3c381450d" containerID="7c716f4cbcf6f24dd054838f2140dd17dfc86e227f15ff8751421f1115943a30" exitCode=0 Jan 22 14:03:53 crc kubenswrapper[4769]: I0122 14:03:53.436673 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-6vgx7" event={"ID":"3137766d-8b45-47a0-a7ca-f1a3c381450d","Type":"ContainerDied","Data":"7c716f4cbcf6f24dd054838f2140dd17dfc86e227f15ff8751421f1115943a30"} Jan 22 14:03:53 crc kubenswrapper[4769]: I0122 14:03:53.437144 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-gjxrr" Jan 22 14:03:53 crc kubenswrapper[4769]: I0122 14:03:53.474736 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-gjxrr"] Jan 22 14:03:53 crc kubenswrapper[4769]: I0122 14:03:53.483526 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-gjxrr"] Jan 22 14:03:53 crc kubenswrapper[4769]: I0122 14:03:53.675434 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 22 14:03:53 crc kubenswrapper[4769]: I0122 14:03:53.675659 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="6e7522e6-de75-492d-b445-a463f875e393" containerName="kube-state-metrics" containerID="cri-o://b5c1102409d5a3f0491aca7b10a914b1f650214297aaff7b15a9e7d0fb19780f" gracePeriod=30 Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.039160 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.039518 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.189104 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.219328 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9fdpt\" (UniqueName: \"kubernetes.io/projected/6e7522e6-de75-492d-b445-a463f875e393-kube-api-access-9fdpt\") pod \"6e7522e6-de75-492d-b445-a463f875e393\" (UID: \"6e7522e6-de75-492d-b445-a463f875e393\") " Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.228096 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e7522e6-de75-492d-b445-a463f875e393-kube-api-access-9fdpt" (OuterVolumeSpecName: "kube-api-access-9fdpt") pod "6e7522e6-de75-492d-b445-a463f875e393" (UID: "6e7522e6-de75-492d-b445-a463f875e393"). InnerVolumeSpecName "kube-api-access-9fdpt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.322992 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9fdpt\" (UniqueName: \"kubernetes.io/projected/6e7522e6-de75-492d-b445-a463f875e393-kube-api-access-9fdpt\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.449698 4769 generic.go:334] "Generic (PLEG): container finished" podID="6e7522e6-de75-492d-b445-a463f875e393" containerID="b5c1102409d5a3f0491aca7b10a914b1f650214297aaff7b15a9e7d0fb19780f" exitCode=2 Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.450886 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6e7522e6-de75-492d-b445-a463f875e393","Type":"ContainerDied","Data":"b5c1102409d5a3f0491aca7b10a914b1f650214297aaff7b15a9e7d0fb19780f"} Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.450917 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.450923 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6e7522e6-de75-492d-b445-a463f875e393","Type":"ContainerDied","Data":"cb0f27b9c3686fd6437f8bd8519d2239c1ac22e630bed57eba5dc3bb400528c4"} Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.450934 4769 scope.go:117] "RemoveContainer" containerID="b5c1102409d5a3f0491aca7b10a914b1f650214297aaff7b15a9e7d0fb19780f" Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.513849 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.527525 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.536234 4769 scope.go:117] "RemoveContainer" containerID="b5c1102409d5a3f0491aca7b10a914b1f650214297aaff7b15a9e7d0fb19780f" Jan 22 14:03:54 crc kubenswrapper[4769]: E0122 14:03:54.542290 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5c1102409d5a3f0491aca7b10a914b1f650214297aaff7b15a9e7d0fb19780f\": container with ID starting with b5c1102409d5a3f0491aca7b10a914b1f650214297aaff7b15a9e7d0fb19780f not found: ID does not exist" containerID="b5c1102409d5a3f0491aca7b10a914b1f650214297aaff7b15a9e7d0fb19780f" Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.542486 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5c1102409d5a3f0491aca7b10a914b1f650214297aaff7b15a9e7d0fb19780f"} err="failed to get container status \"b5c1102409d5a3f0491aca7b10a914b1f650214297aaff7b15a9e7d0fb19780f\": rpc error: code = NotFound desc = could not find container \"b5c1102409d5a3f0491aca7b10a914b1f650214297aaff7b15a9e7d0fb19780f\": container with ID starting with b5c1102409d5a3f0491aca7b10a914b1f650214297aaff7b15a9e7d0fb19780f not found: ID does not exist" Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.548676 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 22 14:03:54 crc kubenswrapper[4769]: E0122 14:03:54.549354 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4" containerName="dnsmasq-dns" Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.549423 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4" containerName="dnsmasq-dns" Jan 22 14:03:54 crc kubenswrapper[4769]: E0122 14:03:54.549498 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4" containerName="init" Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.549550 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4" containerName="init" Jan 22 14:03:54 crc kubenswrapper[4769]: E0122 14:03:54.549638 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e7522e6-de75-492d-b445-a463f875e393" containerName="kube-state-metrics" Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.549690 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e7522e6-de75-492d-b445-a463f875e393" containerName="kube-state-metrics" Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.549937 4769 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4" containerName="dnsmasq-dns" Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.549999 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e7522e6-de75-492d-b445-a463f875e393" containerName="kube-state-metrics" Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.550673 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.553041 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.554107 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.579223 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.634864 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27867d6f-28eb-45b6-afd4-9ad9da5a5a0f-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"27867d6f-28eb-45b6-afd4-9ad9da5a5a0f\") " pod="openstack/kube-state-metrics-0" Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.634921 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/27867d6f-28eb-45b6-afd4-9ad9da5a5a0f-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"27867d6f-28eb-45b6-afd4-9ad9da5a5a0f\") " pod="openstack/kube-state-metrics-0" Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.635028 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/27867d6f-28eb-45b6-afd4-9ad9da5a5a0f-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"27867d6f-28eb-45b6-afd4-9ad9da5a5a0f\") " pod="openstack/kube-state-metrics-0" Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.635127 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sn7h\" (UniqueName: \"kubernetes.io/projected/27867d6f-28eb-45b6-afd4-9ad9da5a5a0f-kube-api-access-4sn7h\") pod \"kube-state-metrics-0\" (UID: \"27867d6f-28eb-45b6-afd4-9ad9da5a5a0f\") " pod="openstack/kube-state-metrics-0" Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.737737 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27867d6f-28eb-45b6-afd4-9ad9da5a5a0f-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"27867d6f-28eb-45b6-afd4-9ad9da5a5a0f\") " pod="openstack/kube-state-metrics-0" Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.737810 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/27867d6f-28eb-45b6-afd4-9ad9da5a5a0f-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"27867d6f-28eb-45b6-afd4-9ad9da5a5a0f\") " pod="openstack/kube-state-metrics-0" Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.737852 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/27867d6f-28eb-45b6-afd4-9ad9da5a5a0f-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"27867d6f-28eb-45b6-afd4-9ad9da5a5a0f\") " pod="openstack/kube-state-metrics-0" Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.737882 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4sn7h\" (UniqueName: \"kubernetes.io/projected/27867d6f-28eb-45b6-afd4-9ad9da5a5a0f-kube-api-access-4sn7h\") pod \"kube-state-metrics-0\" (UID: \"27867d6f-28eb-45b6-afd4-9ad9da5a5a0f\") " pod="openstack/kube-state-metrics-0" Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.742993 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/27867d6f-28eb-45b6-afd4-9ad9da5a5a0f-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"27867d6f-28eb-45b6-afd4-9ad9da5a5a0f\") " pod="openstack/kube-state-metrics-0" Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.752074 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27867d6f-28eb-45b6-afd4-9ad9da5a5a0f-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"27867d6f-28eb-45b6-afd4-9ad9da5a5a0f\") " pod="openstack/kube-state-metrics-0" Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.761272 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/27867d6f-28eb-45b6-afd4-9ad9da5a5a0f-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"27867d6f-28eb-45b6-afd4-9ad9da5a5a0f\") " pod="openstack/kube-state-metrics-0" Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.764255 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4sn7h\" (UniqueName: \"kubernetes.io/projected/27867d6f-28eb-45b6-afd4-9ad9da5a5a0f-kube-api-access-4sn7h\") pod \"kube-state-metrics-0\" (UID: \"27867d6f-28eb-45b6-afd4-9ad9da5a5a0f\") " pod="openstack/kube-state-metrics-0" Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.857642 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-6vgx7" Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.891490 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.919374 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e7522e6-de75-492d-b445-a463f875e393" path="/var/lib/kubelet/pods/6e7522e6-de75-492d-b445-a463f875e393/volumes" Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.919962 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4" path="/var/lib/kubelet/pods/e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4/volumes" Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.943450 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3137766d-8b45-47a0-a7ca-f1a3c381450d-config-data\") pod \"3137766d-8b45-47a0-a7ca-f1a3c381450d\" (UID: \"3137766d-8b45-47a0-a7ca-f1a3c381450d\") " Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.943771 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3137766d-8b45-47a0-a7ca-f1a3c381450d-combined-ca-bundle\") pod \"3137766d-8b45-47a0-a7ca-f1a3c381450d\" (UID: \"3137766d-8b45-47a0-a7ca-f1a3c381450d\") " Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.943985 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3137766d-8b45-47a0-a7ca-f1a3c381450d-scripts\") pod \"3137766d-8b45-47a0-a7ca-f1a3c381450d\" (UID: \"3137766d-8b45-47a0-a7ca-f1a3c381450d\") " Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.944304 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpn9x\" (UniqueName: \"kubernetes.io/projected/3137766d-8b45-47a0-a7ca-f1a3c381450d-kube-api-access-qpn9x\") pod \"3137766d-8b45-47a0-a7ca-f1a3c381450d\" (UID: \"3137766d-8b45-47a0-a7ca-f1a3c381450d\") " Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.949777 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3137766d-8b45-47a0-a7ca-f1a3c381450d-kube-api-access-qpn9x" (OuterVolumeSpecName: "kube-api-access-qpn9x") pod "3137766d-8b45-47a0-a7ca-f1a3c381450d" (UID: "3137766d-8b45-47a0-a7ca-f1a3c381450d"). InnerVolumeSpecName "kube-api-access-qpn9x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.949947 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3137766d-8b45-47a0-a7ca-f1a3c381450d-scripts" (OuterVolumeSpecName: "scripts") pod "3137766d-8b45-47a0-a7ca-f1a3c381450d" (UID: "3137766d-8b45-47a0-a7ca-f1a3c381450d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.979270 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3137766d-8b45-47a0-a7ca-f1a3c381450d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3137766d-8b45-47a0-a7ca-f1a3c381450d" (UID: "3137766d-8b45-47a0-a7ca-f1a3c381450d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.985410 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3137766d-8b45-47a0-a7ca-f1a3c381450d-config-data" (OuterVolumeSpecName: "config-data") pod "3137766d-8b45-47a0-a7ca-f1a3c381450d" (UID: "3137766d-8b45-47a0-a7ca-f1a3c381450d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:03:55 crc kubenswrapper[4769]: I0122 14:03:55.046650 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qpn9x\" (UniqueName: \"kubernetes.io/projected/3137766d-8b45-47a0-a7ca-f1a3c381450d-kube-api-access-qpn9x\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:55 crc kubenswrapper[4769]: I0122 14:03:55.047003 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3137766d-8b45-47a0-a7ca-f1a3c381450d-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:55 crc kubenswrapper[4769]: I0122 14:03:55.047016 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3137766d-8b45-47a0-a7ca-f1a3c381450d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:55 crc kubenswrapper[4769]: I0122 14:03:55.047029 4769 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3137766d-8b45-47a0-a7ca-f1a3c381450d-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:55 crc kubenswrapper[4769]: I0122 14:03:55.386769 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 22 14:03:55 crc kubenswrapper[4769]: W0122 14:03:55.392244 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod27867d6f_28eb_45b6_afd4_9ad9da5a5a0f.slice/crio-ef096966a058709f0ff12d92b098282c0025220288546b81e1a37f5c81c924f1 WatchSource:0}: Error finding container ef096966a058709f0ff12d92b098282c0025220288546b81e1a37f5c81c924f1: Status 404 returned error can't find the container with id ef096966a058709f0ff12d92b098282c0025220288546b81e1a37f5c81c924f1 Jan 22 14:03:55 crc kubenswrapper[4769]: I0122 14:03:55.459114 4769 generic.go:334] "Generic (PLEG): container finished" podID="60fa7062-c4e9-4700-88e1-af5262989c6f" containerID="b968152c0d0005bd0bae6dd12531f4e3ac4944479a46e411981d500bf6e21a03" exitCode=0 Jan 22 14:03:55 crc kubenswrapper[4769]: I0122 14:03:55.459193 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-cg5m6" event={"ID":"60fa7062-c4e9-4700-88e1-af5262989c6f","Type":"ContainerDied","Data":"b968152c0d0005bd0bae6dd12531f4e3ac4944479a46e411981d500bf6e21a03"} Jan 22 14:03:55 crc kubenswrapper[4769]: I0122 14:03:55.463218 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-6vgx7" Jan 22 14:03:55 crc kubenswrapper[4769]: I0122 14:03:55.463223 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-6vgx7" event={"ID":"3137766d-8b45-47a0-a7ca-f1a3c381450d","Type":"ContainerDied","Data":"0e1ef3a355c24af9ddca6d17ce3327e51772b713889345a7a1b20a2fbc113938"} Jan 22 14:03:55 crc kubenswrapper[4769]: I0122 14:03:55.463545 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e1ef3a355c24af9ddca6d17ce3327e51772b713889345a7a1b20a2fbc113938" Jan 22 14:03:55 crc kubenswrapper[4769]: I0122 14:03:55.464905 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"27867d6f-28eb-45b6-afd4-9ad9da5a5a0f","Type":"ContainerStarted","Data":"ef096966a058709f0ff12d92b098282c0025220288546b81e1a37f5c81c924f1"} Jan 22 14:03:55 crc kubenswrapper[4769]: I0122 14:03:55.587277 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 14:03:55 crc kubenswrapper[4769]: I0122 14:03:55.587493 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="c9c060e2-5b33-4452-bc58-2ce6e9f865d4" containerName="nova-scheduler-scheduler" containerID="cri-o://936c9f73bbce73a5f4e62ca042688b2e127679bb594e2bf6053e27831d6b26d1" gracePeriod=30 Jan 22 14:03:55 crc kubenswrapper[4769]: I0122 14:03:55.604765 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 22 14:03:55 crc kubenswrapper[4769]: I0122 14:03:55.605122 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0a87cdd0-cc09-4004-90bf-bbe9bd9b453d" containerName="nova-api-log" containerID="cri-o://6234a0446d758d662f481f2255e5c0d82c8486e9bd8315786bd7329443cc3434" gracePeriod=30 Jan 22 14:03:55 crc kubenswrapper[4769]: I0122 14:03:55.605195 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0a87cdd0-cc09-4004-90bf-bbe9bd9b453d" containerName="nova-api-api" containerID="cri-o://05fba83fc66875a0c66f3a2ceadc7ebd73ed593ba2c2b3f6ecd6111b7621b631" gracePeriod=30 Jan 22 14:03:55 crc kubenswrapper[4769]: I0122 14:03:55.618055 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 14:03:55 crc kubenswrapper[4769]: I0122 14:03:55.978775 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 14:03:55 crc kubenswrapper[4769]: I0122 14:03:55.979386 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2da17df6-1c4c-453a-9943-4a44e8a14664" containerName="ceilometer-central-agent" containerID="cri-o://15eac8b08c32812a039810bb011b46bf61ee7b4ab7cdc8b93d737f5a20210c46" gracePeriod=30 Jan 22 14:03:55 crc kubenswrapper[4769]: I0122 14:03:55.979497 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2da17df6-1c4c-453a-9943-4a44e8a14664" containerName="ceilometer-notification-agent" containerID="cri-o://b5629e480d5f9bca2b9aefb9619e124dd88f058584573bab31d2157d72077ec5" gracePeriod=30 Jan 22 14:03:55 crc kubenswrapper[4769]: I0122 14:03:55.979464 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2da17df6-1c4c-453a-9943-4a44e8a14664" containerName="sg-core" 
containerID="cri-o://18e6c2922fc56fe03b8bd1a70aa73fd29a75c4ee02f29e129940eb6d615fd947" gracePeriod=30 Jan 22 14:03:55 crc kubenswrapper[4769]: I0122 14:03:55.979633 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2da17df6-1c4c-453a-9943-4a44e8a14664" containerName="proxy-httpd" containerID="cri-o://0bb74bf9b515919f39e14655679413cd135c984d3d72697791b38e7390ffc533" gracePeriod=30 Jan 22 14:03:56 crc kubenswrapper[4769]: I0122 14:03:56.479844 4769 generic.go:334] "Generic (PLEG): container finished" podID="0a87cdd0-cc09-4004-90bf-bbe9bd9b453d" containerID="6234a0446d758d662f481f2255e5c0d82c8486e9bd8315786bd7329443cc3434" exitCode=143 Jan 22 14:03:56 crc kubenswrapper[4769]: I0122 14:03:56.479908 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0a87cdd0-cc09-4004-90bf-bbe9bd9b453d","Type":"ContainerDied","Data":"6234a0446d758d662f481f2255e5c0d82c8486e9bd8315786bd7329443cc3434"} Jan 22 14:03:56 crc kubenswrapper[4769]: I0122 14:03:56.487025 4769 generic.go:334] "Generic (PLEG): container finished" podID="2da17df6-1c4c-453a-9943-4a44e8a14664" containerID="0bb74bf9b515919f39e14655679413cd135c984d3d72697791b38e7390ffc533" exitCode=0 Jan 22 14:03:56 crc kubenswrapper[4769]: I0122 14:03:56.487065 4769 generic.go:334] "Generic (PLEG): container finished" podID="2da17df6-1c4c-453a-9943-4a44e8a14664" containerID="18e6c2922fc56fe03b8bd1a70aa73fd29a75c4ee02f29e129940eb6d615fd947" exitCode=2 Jan 22 14:03:56 crc kubenswrapper[4769]: I0122 14:03:56.487084 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2da17df6-1c4c-453a-9943-4a44e8a14664","Type":"ContainerDied","Data":"0bb74bf9b515919f39e14655679413cd135c984d3d72697791b38e7390ffc533"} Jan 22 14:03:56 crc kubenswrapper[4769]: I0122 14:03:56.487122 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2da17df6-1c4c-453a-9943-4a44e8a14664","Type":"ContainerDied","Data":"18e6c2922fc56fe03b8bd1a70aa73fd29a75c4ee02f29e129940eb6d615fd947"} Jan 22 14:03:56 crc kubenswrapper[4769]: I0122 14:03:56.488881 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"27867d6f-28eb-45b6-afd4-9ad9da5a5a0f","Type":"ContainerStarted","Data":"1336d5463792b849ea5857a986cf5130df43494f713c418bbe274849cf16ec71"} Jan 22 14:03:56 crc kubenswrapper[4769]: I0122 14:03:56.489177 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="7a025db2-7758-45ec-a6dc-d5bbd07e339b" containerName="nova-metadata-log" containerID="cri-o://8c789004b4d456cad7d8a9e052bed8c52200c81dfed6033126bfe22fc57a38ef" gracePeriod=30 Jan 22 14:03:56 crc kubenswrapper[4769]: I0122 14:03:56.489243 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="7a025db2-7758-45ec-a6dc-d5bbd07e339b" containerName="nova-metadata-metadata" containerID="cri-o://e52c9103648a0a34c7603b656d8929a8feba8a4d8ec58efa070b2b3c3423b00e" gracePeriod=30 Jan 22 14:03:56 crc kubenswrapper[4769]: I0122 14:03:56.537262 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.188636513 podStartE2EDuration="2.537235878s" podCreationTimestamp="2026-01-22 14:03:54 +0000 UTC" firstStartedPulling="2026-01-22 14:03:55.395047446 +0000 UTC m=+1214.806157375" lastFinishedPulling="2026-01-22 14:03:55.743646811 +0000 UTC 
m=+1215.154756740" observedRunningTime="2026-01-22 14:03:56.515719074 +0000 UTC m=+1215.926829013" watchObservedRunningTime="2026-01-22 14:03:56.537235878 +0000 UTC m=+1215.948345807" Jan 22 14:03:56 crc kubenswrapper[4769]: I0122 14:03:56.901475 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-cg5m6" Jan 22 14:03:56 crc kubenswrapper[4769]: E0122 14:03:56.921064 4769 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 936c9f73bbce73a5f4e62ca042688b2e127679bb594e2bf6053e27831d6b26d1 is running failed: container process not found" containerID="936c9f73bbce73a5f4e62ca042688b2e127679bb594e2bf6053e27831d6b26d1" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 22 14:03:56 crc kubenswrapper[4769]: E0122 14:03:56.925250 4769 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 936c9f73bbce73a5f4e62ca042688b2e127679bb594e2bf6053e27831d6b26d1 is running failed: container process not found" containerID="936c9f73bbce73a5f4e62ca042688b2e127679bb594e2bf6053e27831d6b26d1" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 22 14:03:56 crc kubenswrapper[4769]: E0122 14:03:56.938144 4769 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 936c9f73bbce73a5f4e62ca042688b2e127679bb594e2bf6053e27831d6b26d1 is running failed: container process not found" containerID="936c9f73bbce73a5f4e62ca042688b2e127679bb594e2bf6053e27831d6b26d1" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 22 14:03:56 crc kubenswrapper[4769]: E0122 14:03:56.938213 4769 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 936c9f73bbce73a5f4e62ca042688b2e127679bb594e2bf6053e27831d6b26d1 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="c9c060e2-5b33-4452-bc58-2ce6e9f865d4" containerName="nova-scheduler-scheduler" Jan 22 14:03:56 crc kubenswrapper[4769]: I0122 14:03:56.950903 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 22 14:03:56 crc kubenswrapper[4769]: I0122 14:03:56.998036 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60fa7062-c4e9-4700-88e1-af5262989c6f-config-data\") pod \"60fa7062-c4e9-4700-88e1-af5262989c6f\" (UID: \"60fa7062-c4e9-4700-88e1-af5262989c6f\") " Jan 22 14:03:56 crc kubenswrapper[4769]: I0122 14:03:56.998080 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pggb5\" (UniqueName: \"kubernetes.io/projected/60fa7062-c4e9-4700-88e1-af5262989c6f-kube-api-access-pggb5\") pod \"60fa7062-c4e9-4700-88e1-af5262989c6f\" (UID: \"60fa7062-c4e9-4700-88e1-af5262989c6f\") " Jan 22 14:03:56 crc kubenswrapper[4769]: I0122 14:03:56.998108 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nlr5\" (UniqueName: \"kubernetes.io/projected/c9c060e2-5b33-4452-bc58-2ce6e9f865d4-kube-api-access-9nlr5\") pod \"c9c060e2-5b33-4452-bc58-2ce6e9f865d4\" (UID: \"c9c060e2-5b33-4452-bc58-2ce6e9f865d4\") " Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.008020 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60fa7062-c4e9-4700-88e1-af5262989c6f-kube-api-access-pggb5" (OuterVolumeSpecName: "kube-api-access-pggb5") pod "60fa7062-c4e9-4700-88e1-af5262989c6f" (UID: "60fa7062-c4e9-4700-88e1-af5262989c6f"). InnerVolumeSpecName "kube-api-access-pggb5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.008356 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9c060e2-5b33-4452-bc58-2ce6e9f865d4-kube-api-access-9nlr5" (OuterVolumeSpecName: "kube-api-access-9nlr5") pod "c9c060e2-5b33-4452-bc58-2ce6e9f865d4" (UID: "c9c060e2-5b33-4452-bc58-2ce6e9f865d4"). InnerVolumeSpecName "kube-api-access-9nlr5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.030430 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60fa7062-c4e9-4700-88e1-af5262989c6f-config-data" (OuterVolumeSpecName: "config-data") pod "60fa7062-c4e9-4700-88e1-af5262989c6f" (UID: "60fa7062-c4e9-4700-88e1-af5262989c6f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.086670 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.099316 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60fa7062-c4e9-4700-88e1-af5262989c6f-combined-ca-bundle\") pod \"60fa7062-c4e9-4700-88e1-af5262989c6f\" (UID: \"60fa7062-c4e9-4700-88e1-af5262989c6f\") " Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.099369 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2gqfb\" (UniqueName: \"kubernetes.io/projected/7a025db2-7758-45ec-a6dc-d5bbd07e339b-kube-api-access-2gqfb\") pod \"7a025db2-7758-45ec-a6dc-d5bbd07e339b\" (UID: \"7a025db2-7758-45ec-a6dc-d5bbd07e339b\") " Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.099403 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a025db2-7758-45ec-a6dc-d5bbd07e339b-nova-metadata-tls-certs\") pod \"7a025db2-7758-45ec-a6dc-d5bbd07e339b\" (UID: \"7a025db2-7758-45ec-a6dc-d5bbd07e339b\") " Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.099475 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9c060e2-5b33-4452-bc58-2ce6e9f865d4-combined-ca-bundle\") pod \"c9c060e2-5b33-4452-bc58-2ce6e9f865d4\" (UID: \"c9c060e2-5b33-4452-bc58-2ce6e9f865d4\") " Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.099503 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9c060e2-5b33-4452-bc58-2ce6e9f865d4-config-data\") pod \"c9c060e2-5b33-4452-bc58-2ce6e9f865d4\" (UID: \"c9c060e2-5b33-4452-bc58-2ce6e9f865d4\") " Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.099542 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60fa7062-c4e9-4700-88e1-af5262989c6f-scripts\") pod \"60fa7062-c4e9-4700-88e1-af5262989c6f\" (UID: \"60fa7062-c4e9-4700-88e1-af5262989c6f\") " Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.100037 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60fa7062-c4e9-4700-88e1-af5262989c6f-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.100063 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pggb5\" (UniqueName: \"kubernetes.io/projected/60fa7062-c4e9-4700-88e1-af5262989c6f-kube-api-access-pggb5\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.100076 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nlr5\" (UniqueName: \"kubernetes.io/projected/c9c060e2-5b33-4452-bc58-2ce6e9f865d4-kube-api-access-9nlr5\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.107235 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60fa7062-c4e9-4700-88e1-af5262989c6f-scripts" (OuterVolumeSpecName: "scripts") pod "60fa7062-c4e9-4700-88e1-af5262989c6f" (UID: "60fa7062-c4e9-4700-88e1-af5262989c6f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.140061 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a025db2-7758-45ec-a6dc-d5bbd07e339b-kube-api-access-2gqfb" (OuterVolumeSpecName: "kube-api-access-2gqfb") pod "7a025db2-7758-45ec-a6dc-d5bbd07e339b" (UID: "7a025db2-7758-45ec-a6dc-d5bbd07e339b"). InnerVolumeSpecName "kube-api-access-2gqfb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.140432 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9c060e2-5b33-4452-bc58-2ce6e9f865d4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c9c060e2-5b33-4452-bc58-2ce6e9f865d4" (UID: "c9c060e2-5b33-4452-bc58-2ce6e9f865d4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.144047 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60fa7062-c4e9-4700-88e1-af5262989c6f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "60fa7062-c4e9-4700-88e1-af5262989c6f" (UID: "60fa7062-c4e9-4700-88e1-af5262989c6f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.165254 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9c060e2-5b33-4452-bc58-2ce6e9f865d4-config-data" (OuterVolumeSpecName: "config-data") pod "c9c060e2-5b33-4452-bc58-2ce6e9f865d4" (UID: "c9c060e2-5b33-4452-bc58-2ce6e9f865d4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.183167 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a025db2-7758-45ec-a6dc-d5bbd07e339b-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "7a025db2-7758-45ec-a6dc-d5bbd07e339b" (UID: "7a025db2-7758-45ec-a6dc-d5bbd07e339b"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.200996 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a025db2-7758-45ec-a6dc-d5bbd07e339b-combined-ca-bundle\") pod \"7a025db2-7758-45ec-a6dc-d5bbd07e339b\" (UID: \"7a025db2-7758-45ec-a6dc-d5bbd07e339b\") " Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.201120 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a025db2-7758-45ec-a6dc-d5bbd07e339b-logs\") pod \"7a025db2-7758-45ec-a6dc-d5bbd07e339b\" (UID: \"7a025db2-7758-45ec-a6dc-d5bbd07e339b\") " Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.201150 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a025db2-7758-45ec-a6dc-d5bbd07e339b-config-data\") pod \"7a025db2-7758-45ec-a6dc-d5bbd07e339b\" (UID: \"7a025db2-7758-45ec-a6dc-d5bbd07e339b\") " Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.201874 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60fa7062-c4e9-4700-88e1-af5262989c6f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.201901 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2gqfb\" (UniqueName: \"kubernetes.io/projected/7a025db2-7758-45ec-a6dc-d5bbd07e339b-kube-api-access-2gqfb\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.201915 4769 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a025db2-7758-45ec-a6dc-d5bbd07e339b-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.201928 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9c060e2-5b33-4452-bc58-2ce6e9f865d4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.201940 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9c060e2-5b33-4452-bc58-2ce6e9f865d4-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.201953 4769 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60fa7062-c4e9-4700-88e1-af5262989c6f-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.202136 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a025db2-7758-45ec-a6dc-d5bbd07e339b-logs" (OuterVolumeSpecName: "logs") pod "7a025db2-7758-45ec-a6dc-d5bbd07e339b" (UID: "7a025db2-7758-45ec-a6dc-d5bbd07e339b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.232990 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a025db2-7758-45ec-a6dc-d5bbd07e339b-config-data" (OuterVolumeSpecName: "config-data") pod "7a025db2-7758-45ec-a6dc-d5bbd07e339b" (UID: "7a025db2-7758-45ec-a6dc-d5bbd07e339b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.235126 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a025db2-7758-45ec-a6dc-d5bbd07e339b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7a025db2-7758-45ec-a6dc-d5bbd07e339b" (UID: "7a025db2-7758-45ec-a6dc-d5bbd07e339b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.304160 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a025db2-7758-45ec-a6dc-d5bbd07e339b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.304209 4769 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a025db2-7758-45ec-a6dc-d5bbd07e339b-logs\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.304220 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a025db2-7758-45ec-a6dc-d5bbd07e339b-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.500716 4769 generic.go:334] "Generic (PLEG): container finished" podID="c9c060e2-5b33-4452-bc58-2ce6e9f865d4" containerID="936c9f73bbce73a5f4e62ca042688b2e127679bb594e2bf6053e27831d6b26d1" exitCode=0 Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.500755 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c9c060e2-5b33-4452-bc58-2ce6e9f865d4","Type":"ContainerDied","Data":"936c9f73bbce73a5f4e62ca042688b2e127679bb594e2bf6053e27831d6b26d1"} Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.500811 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c9c060e2-5b33-4452-bc58-2ce6e9f865d4","Type":"ContainerDied","Data":"d4ad591c838bef0b0a89079c05faf04520570b378a76c8d398873ab928b3ec0a"} Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.500809 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.500833 4769 scope.go:117] "RemoveContainer" containerID="936c9f73bbce73a5f4e62ca042688b2e127679bb594e2bf6053e27831d6b26d1" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.504812 4769 generic.go:334] "Generic (PLEG): container finished" podID="2da17df6-1c4c-453a-9943-4a44e8a14664" containerID="15eac8b08c32812a039810bb011b46bf61ee7b4ab7cdc8b93d737f5a20210c46" exitCode=0 Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.504873 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2da17df6-1c4c-453a-9943-4a44e8a14664","Type":"ContainerDied","Data":"15eac8b08c32812a039810bb011b46bf61ee7b4ab7cdc8b93d737f5a20210c46"} Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.509080 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-cg5m6" event={"ID":"60fa7062-c4e9-4700-88e1-af5262989c6f","Type":"ContainerDied","Data":"4281687c125bb60dc1e9c561adac44c125c994b9787a7a132375bd1d9a17e1e3"} Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.509121 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4281687c125bb60dc1e9c561adac44c125c994b9787a7a132375bd1d9a17e1e3" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.509200 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-cg5m6" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.519696 4769 generic.go:334] "Generic (PLEG): container finished" podID="7a025db2-7758-45ec-a6dc-d5bbd07e339b" containerID="e52c9103648a0a34c7603b656d8929a8feba8a4d8ec58efa070b2b3c3423b00e" exitCode=0 Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.519735 4769 generic.go:334] "Generic (PLEG): container finished" podID="7a025db2-7758-45ec-a6dc-d5bbd07e339b" containerID="8c789004b4d456cad7d8a9e052bed8c52200c81dfed6033126bfe22fc57a38ef" exitCode=143 Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.519760 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.519888 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7a025db2-7758-45ec-a6dc-d5bbd07e339b","Type":"ContainerDied","Data":"e52c9103648a0a34c7603b656d8929a8feba8a4d8ec58efa070b2b3c3423b00e"} Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.519920 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7a025db2-7758-45ec-a6dc-d5bbd07e339b","Type":"ContainerDied","Data":"8c789004b4d456cad7d8a9e052bed8c52200c81dfed6033126bfe22fc57a38ef"} Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.519935 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7a025db2-7758-45ec-a6dc-d5bbd07e339b","Type":"ContainerDied","Data":"dcd8422210c770f204f5cc303d111ef48b5faf478309f593879a84852fa5cb77"} Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.520514 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.542918 4769 scope.go:117] "RemoveContainer" containerID="936c9f73bbce73a5f4e62ca042688b2e127679bb594e2bf6053e27831d6b26d1" Jan 22 14:03:57 crc kubenswrapper[4769]: E0122 14:03:57.543359 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"936c9f73bbce73a5f4e62ca042688b2e127679bb594e2bf6053e27831d6b26d1\": container with ID starting with 936c9f73bbce73a5f4e62ca042688b2e127679bb594e2bf6053e27831d6b26d1 not found: ID does not exist" containerID="936c9f73bbce73a5f4e62ca042688b2e127679bb594e2bf6053e27831d6b26d1" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.543386 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"936c9f73bbce73a5f4e62ca042688b2e127679bb594e2bf6053e27831d6b26d1"} err="failed to get container status \"936c9f73bbce73a5f4e62ca042688b2e127679bb594e2bf6053e27831d6b26d1\": rpc error: code = NotFound desc = could not find container \"936c9f73bbce73a5f4e62ca042688b2e127679bb594e2bf6053e27831d6b26d1\": container with ID starting with 936c9f73bbce73a5f4e62ca042688b2e127679bb594e2bf6053e27831d6b26d1 not found: ID does not exist" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.543405 4769 scope.go:117] "RemoveContainer" containerID="e52c9103648a0a34c7603b656d8929a8feba8a4d8ec58efa070b2b3c3423b00e" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.550219 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.565190 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.577884 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.585683 4769 scope.go:117] "RemoveContainer" containerID="8c789004b4d456cad7d8a9e052bed8c52200c81dfed6033126bfe22fc57a38ef" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.587383 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 14:03:57 crc kubenswrapper[4769]: E0122 14:03:57.588014 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a025db2-7758-45ec-a6dc-d5bbd07e339b" containerName="nova-metadata-log" Jan 22 14:03:57 crc 
kubenswrapper[4769]: I0122 14:03:57.588089 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a025db2-7758-45ec-a6dc-d5bbd07e339b" containerName="nova-metadata-log" Jan 22 14:03:57 crc kubenswrapper[4769]: E0122 14:03:57.588112 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a025db2-7758-45ec-a6dc-d5bbd07e339b" containerName="nova-metadata-metadata" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.588121 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a025db2-7758-45ec-a6dc-d5bbd07e339b" containerName="nova-metadata-metadata" Jan 22 14:03:57 crc kubenswrapper[4769]: E0122 14:03:57.588144 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3137766d-8b45-47a0-a7ca-f1a3c381450d" containerName="nova-manage" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.588153 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="3137766d-8b45-47a0-a7ca-f1a3c381450d" containerName="nova-manage" Jan 22 14:03:57 crc kubenswrapper[4769]: E0122 14:03:57.588171 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9c060e2-5b33-4452-bc58-2ce6e9f865d4" containerName="nova-scheduler-scheduler" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.588179 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9c060e2-5b33-4452-bc58-2ce6e9f865d4" containerName="nova-scheduler-scheduler" Jan 22 14:03:57 crc kubenswrapper[4769]: E0122 14:03:57.588196 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60fa7062-c4e9-4700-88e1-af5262989c6f" containerName="nova-cell1-conductor-db-sync" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.588205 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="60fa7062-c4e9-4700-88e1-af5262989c6f" containerName="nova-cell1-conductor-db-sync" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.588455 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="3137766d-8b45-47a0-a7ca-f1a3c381450d" containerName="nova-manage" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.588478 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9c060e2-5b33-4452-bc58-2ce6e9f865d4" containerName="nova-scheduler-scheduler" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.588494 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="60fa7062-c4e9-4700-88e1-af5262989c6f" containerName="nova-cell1-conductor-db-sync" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.588512 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a025db2-7758-45ec-a6dc-d5bbd07e339b" containerName="nova-metadata-metadata" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.588525 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a025db2-7758-45ec-a6dc-d5bbd07e339b" containerName="nova-metadata-log" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.589461 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.596106 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.597698 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.607407 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.610076 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.612577 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.623993 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.633894 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.636184 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.640361 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.640655 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.641916 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.652848 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.655179 4769 scope.go:117] "RemoveContainer" containerID="e52c9103648a0a34c7603b656d8929a8feba8a4d8ec58efa070b2b3c3423b00e" Jan 22 14:03:57 crc kubenswrapper[4769]: E0122 14:03:57.657474 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e52c9103648a0a34c7603b656d8929a8feba8a4d8ec58efa070b2b3c3423b00e\": container with ID starting with e52c9103648a0a34c7603b656d8929a8feba8a4d8ec58efa070b2b3c3423b00e not found: ID does not exist" containerID="e52c9103648a0a34c7603b656d8929a8feba8a4d8ec58efa070b2b3c3423b00e" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.657521 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e52c9103648a0a34c7603b656d8929a8feba8a4d8ec58efa070b2b3c3423b00e"} err="failed to get container status \"e52c9103648a0a34c7603b656d8929a8feba8a4d8ec58efa070b2b3c3423b00e\": rpc error: code = NotFound desc = could not find container \"e52c9103648a0a34c7603b656d8929a8feba8a4d8ec58efa070b2b3c3423b00e\": container with ID starting with e52c9103648a0a34c7603b656d8929a8feba8a4d8ec58efa070b2b3c3423b00e not found: ID does not exist" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.657550 4769 scope.go:117] "RemoveContainer" containerID="8c789004b4d456cad7d8a9e052bed8c52200c81dfed6033126bfe22fc57a38ef" Jan 22 14:03:57 crc kubenswrapper[4769]: E0122 14:03:57.658014 4769 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"8c789004b4d456cad7d8a9e052bed8c52200c81dfed6033126bfe22fc57a38ef\": container with ID starting with 8c789004b4d456cad7d8a9e052bed8c52200c81dfed6033126bfe22fc57a38ef not found: ID does not exist" containerID="8c789004b4d456cad7d8a9e052bed8c52200c81dfed6033126bfe22fc57a38ef" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.658051 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c789004b4d456cad7d8a9e052bed8c52200c81dfed6033126bfe22fc57a38ef"} err="failed to get container status \"8c789004b4d456cad7d8a9e052bed8c52200c81dfed6033126bfe22fc57a38ef\": rpc error: code = NotFound desc = could not find container \"8c789004b4d456cad7d8a9e052bed8c52200c81dfed6033126bfe22fc57a38ef\": container with ID starting with 8c789004b4d456cad7d8a9e052bed8c52200c81dfed6033126bfe22fc57a38ef not found: ID does not exist" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.658080 4769 scope.go:117] "RemoveContainer" containerID="e52c9103648a0a34c7603b656d8929a8feba8a4d8ec58efa070b2b3c3423b00e" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.658364 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e52c9103648a0a34c7603b656d8929a8feba8a4d8ec58efa070b2b3c3423b00e"} err="failed to get container status \"e52c9103648a0a34c7603b656d8929a8feba8a4d8ec58efa070b2b3c3423b00e\": rpc error: code = NotFound desc = could not find container \"e52c9103648a0a34c7603b656d8929a8feba8a4d8ec58efa070b2b3c3423b00e\": container with ID starting with e52c9103648a0a34c7603b656d8929a8feba8a4d8ec58efa070b2b3c3423b00e not found: ID does not exist" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.658397 4769 scope.go:117] "RemoveContainer" containerID="8c789004b4d456cad7d8a9e052bed8c52200c81dfed6033126bfe22fc57a38ef" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.658766 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c789004b4d456cad7d8a9e052bed8c52200c81dfed6033126bfe22fc57a38ef"} err="failed to get container status \"8c789004b4d456cad7d8a9e052bed8c52200c81dfed6033126bfe22fc57a38ef\": rpc error: code = NotFound desc = could not find container \"8c789004b4d456cad7d8a9e052bed8c52200c81dfed6033126bfe22fc57a38ef\": container with ID starting with 8c789004b4d456cad7d8a9e052bed8c52200c81dfed6033126bfe22fc57a38ef not found: ID does not exist" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.714538 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7875d554-e943-402f-b176-8644590e7926-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7875d554-e943-402f-b176-8644590e7926\") " pod="openstack/nova-scheduler-0" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.714607 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zps2f\" (UniqueName: \"kubernetes.io/projected/7875d554-e943-402f-b176-8644590e7926-kube-api-access-zps2f\") pod \"nova-scheduler-0\" (UID: \"7875d554-e943-402f-b176-8644590e7926\") " pod="openstack/nova-scheduler-0" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.714648 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7875d554-e943-402f-b176-8644590e7926-config-data\") pod 
\"nova-scheduler-0\" (UID: \"7875d554-e943-402f-b176-8644590e7926\") " pod="openstack/nova-scheduler-0" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.714801 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnflv\" (UniqueName: \"kubernetes.io/projected/e291c368-66b3-42b3-ad52-e3cd93471116-kube-api-access-vnflv\") pod \"nova-cell1-conductor-0\" (UID: \"e291c368-66b3-42b3-ad52-e3cd93471116\") " pod="openstack/nova-cell1-conductor-0" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.714827 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e291c368-66b3-42b3-ad52-e3cd93471116-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"e291c368-66b3-42b3-ad52-e3cd93471116\") " pod="openstack/nova-cell1-conductor-0" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.714873 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e291c368-66b3-42b3-ad52-e3cd93471116-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"e291c368-66b3-42b3-ad52-e3cd93471116\") " pod="openstack/nova-cell1-conductor-0" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.816810 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c7cab01-0731-4a76-a6d5-b6d0905b2386-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5c7cab01-0731-4a76-a6d5-b6d0905b2386\") " pod="openstack/nova-metadata-0" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.816853 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c7cab01-0731-4a76-a6d5-b6d0905b2386-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5c7cab01-0731-4a76-a6d5-b6d0905b2386\") " pod="openstack/nova-metadata-0" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.816997 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c7cab01-0731-4a76-a6d5-b6d0905b2386-config-data\") pod \"nova-metadata-0\" (UID: \"5c7cab01-0731-4a76-a6d5-b6d0905b2386\") " pod="openstack/nova-metadata-0" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.817097 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7875d554-e943-402f-b176-8644590e7926-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7875d554-e943-402f-b176-8644590e7926\") " pod="openstack/nova-scheduler-0" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.817147 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zps2f\" (UniqueName: \"kubernetes.io/projected/7875d554-e943-402f-b176-8644590e7926-kube-api-access-zps2f\") pod \"nova-scheduler-0\" (UID: \"7875d554-e943-402f-b176-8644590e7926\") " pod="openstack/nova-scheduler-0" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.817254 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7875d554-e943-402f-b176-8644590e7926-config-data\") pod \"nova-scheduler-0\" (UID: \"7875d554-e943-402f-b176-8644590e7926\") " 
pod="openstack/nova-scheduler-0" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.817446 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7psb\" (UniqueName: \"kubernetes.io/projected/5c7cab01-0731-4a76-a6d5-b6d0905b2386-kube-api-access-n7psb\") pod \"nova-metadata-0\" (UID: \"5c7cab01-0731-4a76-a6d5-b6d0905b2386\") " pod="openstack/nova-metadata-0" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.817560 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnflv\" (UniqueName: \"kubernetes.io/projected/e291c368-66b3-42b3-ad52-e3cd93471116-kube-api-access-vnflv\") pod \"nova-cell1-conductor-0\" (UID: \"e291c368-66b3-42b3-ad52-e3cd93471116\") " pod="openstack/nova-cell1-conductor-0" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.817681 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e291c368-66b3-42b3-ad52-e3cd93471116-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"e291c368-66b3-42b3-ad52-e3cd93471116\") " pod="openstack/nova-cell1-conductor-0" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.817749 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c7cab01-0731-4a76-a6d5-b6d0905b2386-logs\") pod \"nova-metadata-0\" (UID: \"5c7cab01-0731-4a76-a6d5-b6d0905b2386\") " pod="openstack/nova-metadata-0" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.817847 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e291c368-66b3-42b3-ad52-e3cd93471116-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"e291c368-66b3-42b3-ad52-e3cd93471116\") " pod="openstack/nova-cell1-conductor-0" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.821230 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e291c368-66b3-42b3-ad52-e3cd93471116-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"e291c368-66b3-42b3-ad52-e3cd93471116\") " pod="openstack/nova-cell1-conductor-0" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.821415 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e291c368-66b3-42b3-ad52-e3cd93471116-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"e291c368-66b3-42b3-ad52-e3cd93471116\") " pod="openstack/nova-cell1-conductor-0" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.821567 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7875d554-e943-402f-b176-8644590e7926-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7875d554-e943-402f-b176-8644590e7926\") " pod="openstack/nova-scheduler-0" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.822042 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7875d554-e943-402f-b176-8644590e7926-config-data\") pod \"nova-scheduler-0\" (UID: \"7875d554-e943-402f-b176-8644590e7926\") " pod="openstack/nova-scheduler-0" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.832598 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnflv\" (UniqueName: 
\"kubernetes.io/projected/e291c368-66b3-42b3-ad52-e3cd93471116-kube-api-access-vnflv\") pod \"nova-cell1-conductor-0\" (UID: \"e291c368-66b3-42b3-ad52-e3cd93471116\") " pod="openstack/nova-cell1-conductor-0" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.833353 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zps2f\" (UniqueName: \"kubernetes.io/projected/7875d554-e943-402f-b176-8644590e7926-kube-api-access-zps2f\") pod \"nova-scheduler-0\" (UID: \"7875d554-e943-402f-b176-8644590e7926\") " pod="openstack/nova-scheduler-0" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.919579 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7psb\" (UniqueName: \"kubernetes.io/projected/5c7cab01-0731-4a76-a6d5-b6d0905b2386-kube-api-access-n7psb\") pod \"nova-metadata-0\" (UID: \"5c7cab01-0731-4a76-a6d5-b6d0905b2386\") " pod="openstack/nova-metadata-0" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.919654 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c7cab01-0731-4a76-a6d5-b6d0905b2386-logs\") pod \"nova-metadata-0\" (UID: \"5c7cab01-0731-4a76-a6d5-b6d0905b2386\") " pod="openstack/nova-metadata-0" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.919707 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c7cab01-0731-4a76-a6d5-b6d0905b2386-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5c7cab01-0731-4a76-a6d5-b6d0905b2386\") " pod="openstack/nova-metadata-0" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.919722 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c7cab01-0731-4a76-a6d5-b6d0905b2386-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5c7cab01-0731-4a76-a6d5-b6d0905b2386\") " pod="openstack/nova-metadata-0" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.919754 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c7cab01-0731-4a76-a6d5-b6d0905b2386-config-data\") pod \"nova-metadata-0\" (UID: \"5c7cab01-0731-4a76-a6d5-b6d0905b2386\") " pod="openstack/nova-metadata-0" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.920151 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c7cab01-0731-4a76-a6d5-b6d0905b2386-logs\") pod \"nova-metadata-0\" (UID: \"5c7cab01-0731-4a76-a6d5-b6d0905b2386\") " pod="openstack/nova-metadata-0" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.920236 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.923692 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c7cab01-0731-4a76-a6d5-b6d0905b2386-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5c7cab01-0731-4a76-a6d5-b6d0905b2386\") " pod="openstack/nova-metadata-0" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.924229 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c7cab01-0731-4a76-a6d5-b6d0905b2386-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5c7cab01-0731-4a76-a6d5-b6d0905b2386\") " pod="openstack/nova-metadata-0" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.924729 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c7cab01-0731-4a76-a6d5-b6d0905b2386-config-data\") pod \"nova-metadata-0\" (UID: \"5c7cab01-0731-4a76-a6d5-b6d0905b2386\") " pod="openstack/nova-metadata-0" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.936417 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7psb\" (UniqueName: \"kubernetes.io/projected/5c7cab01-0731-4a76-a6d5-b6d0905b2386-kube-api-access-n7psb\") pod \"nova-metadata-0\" (UID: \"5c7cab01-0731-4a76-a6d5-b6d0905b2386\") " pod="openstack/nova-metadata-0" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.953747 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.974057 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 14:03:58 crc kubenswrapper[4769]: I0122 14:03:58.371511 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 14:03:58 crc kubenswrapper[4769]: I0122 14:03:58.516239 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 14:03:58 crc kubenswrapper[4769]: I0122 14:03:58.524144 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 22 14:03:58 crc kubenswrapper[4769]: I0122 14:03:58.530559 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7875d554-e943-402f-b176-8644590e7926","Type":"ContainerStarted","Data":"572df80009e2badcb09d845c35585498e31a50e4449686f5a44d8ee1e3d26270"} Jan 22 14:03:58 crc kubenswrapper[4769]: W0122 14:03:58.539359 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c7cab01_0731_4a76_a6d5_b6d0905b2386.slice/crio-8ea58d153112320153b0ab6e47deea2ca60609e453fb3c50cf4a5566adce1855 WatchSource:0}: Error finding container 8ea58d153112320153b0ab6e47deea2ca60609e453fb3c50cf4a5566adce1855: Status 404 returned error can't find the container with id 8ea58d153112320153b0ab6e47deea2ca60609e453fb3c50cf4a5566adce1855 Jan 22 14:03:58 crc kubenswrapper[4769]: W0122 14:03:58.541605 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode291c368_66b3_42b3_ad52_e3cd93471116.slice/crio-24c46357ea9a339f1f1d348b7536063beee6aeea67da31590b33fcf5c98dd7a0 WatchSource:0}: Error finding container 24c46357ea9a339f1f1d348b7536063beee6aeea67da31590b33fcf5c98dd7a0: Status 404 returned error can't find the container with id 24c46357ea9a339f1f1d348b7536063beee6aeea67da31590b33fcf5c98dd7a0 Jan 22 14:03:58 crc kubenswrapper[4769]: I0122 14:03:58.900169 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a025db2-7758-45ec-a6dc-d5bbd07e339b" path="/var/lib/kubelet/pods/7a025db2-7758-45ec-a6dc-d5bbd07e339b/volumes" Jan 22 14:03:58 crc kubenswrapper[4769]: I0122 14:03:58.900855 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9c060e2-5b33-4452-bc58-2ce6e9f865d4" path="/var/lib/kubelet/pods/c9c060e2-5b33-4452-bc58-2ce6e9f865d4/volumes" Jan 22 14:03:59 crc kubenswrapper[4769]: I0122 14:03:59.545214 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"e291c368-66b3-42b3-ad52-e3cd93471116","Type":"ContainerStarted","Data":"b72fd79a23896da108be81c426ccddd24e1e3a48d1f49aceeabe6aea1b1d092e"} Jan 22 14:03:59 crc kubenswrapper[4769]: I0122 14:03:59.545638 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"e291c368-66b3-42b3-ad52-e3cd93471116","Type":"ContainerStarted","Data":"24c46357ea9a339f1f1d348b7536063beee6aeea67da31590b33fcf5c98dd7a0"} Jan 22 14:03:59 crc kubenswrapper[4769]: I0122 14:03:59.545676 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 22 14:03:59 crc kubenswrapper[4769]: I0122 14:03:59.547240 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7875d554-e943-402f-b176-8644590e7926","Type":"ContainerStarted","Data":"e0754791b973b6c6e50cd28d6e666820f0fab5aa1539d3354d44e545af3bf6d2"} Jan 22 14:03:59 crc kubenswrapper[4769]: I0122 
14:03:59.549606 4769 generic.go:334] "Generic (PLEG): container finished" podID="0a87cdd0-cc09-4004-90bf-bbe9bd9b453d" containerID="05fba83fc66875a0c66f3a2ceadc7ebd73ed593ba2c2b3f6ecd6111b7621b631" exitCode=0 Jan 22 14:03:59 crc kubenswrapper[4769]: I0122 14:03:59.549670 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0a87cdd0-cc09-4004-90bf-bbe9bd9b453d","Type":"ContainerDied","Data":"05fba83fc66875a0c66f3a2ceadc7ebd73ed593ba2c2b3f6ecd6111b7621b631"} Jan 22 14:03:59 crc kubenswrapper[4769]: I0122 14:03:59.551372 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5c7cab01-0731-4a76-a6d5-b6d0905b2386","Type":"ContainerStarted","Data":"c9ef3086d0eab5a6024f2f27d8147bdef3796ef183a5e360249a426cc534010c"} Jan 22 14:03:59 crc kubenswrapper[4769]: I0122 14:03:59.551406 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5c7cab01-0731-4a76-a6d5-b6d0905b2386","Type":"ContainerStarted","Data":"5f77e6a254e6237b524fe2cf9da977a96602a8070e3ffc2d54bbf6f07842e09b"} Jan 22 14:03:59 crc kubenswrapper[4769]: I0122 14:03:59.551424 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5c7cab01-0731-4a76-a6d5-b6d0905b2386","Type":"ContainerStarted","Data":"8ea58d153112320153b0ab6e47deea2ca60609e453fb3c50cf4a5566adce1855"} Jan 22 14:03:59 crc kubenswrapper[4769]: I0122 14:03:59.566392 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.5663753209999998 podStartE2EDuration="2.566375321s" podCreationTimestamp="2026-01-22 14:03:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:03:59.562480966 +0000 UTC m=+1218.973590905" watchObservedRunningTime="2026-01-22 14:03:59.566375321 +0000 UTC m=+1218.977485250" Jan 22 14:03:59 crc kubenswrapper[4769]: I0122 14:03:59.596771 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.596752736 podStartE2EDuration="2.596752736s" podCreationTimestamp="2026-01-22 14:03:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:03:59.585280484 +0000 UTC m=+1218.996390413" watchObservedRunningTime="2026-01-22 14:03:59.596752736 +0000 UTC m=+1219.007862665" Jan 22 14:03:59 crc kubenswrapper[4769]: I0122 14:03:59.690605 4769 util.go:48] "No ready sandbox for pod can be found. 
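In both pod_startup_latency_tracker entries above the pull timestamps are the zero time (no image pull was needed), so podStartSLOduration equals podStartE2EDuration: watchObservedRunningTime minus podCreationTimestamp. A worked check of the nova-cell1-conductor-0 numbers (arithmetic on the log's own values, not kubelet code):

from datetime import datetime, timezone

created = datetime(2026, 1, 22, 14, 3, 57, tzinfo=timezone.utc)
# watchObservedRunningTime 14:03:59.566375321, nanoseconds truncated to microseconds
observed = datetime(2026, 1, 22, 14, 3, 59, 566375, tzinfo=timezone.utc)
print((observed - created).total_seconds())  # 2.566375, matching podStartSLOduration=2.566375321 up to truncation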
Need to start a new one" pod="openstack/nova-api-0" Jan 22 14:03:59 crc kubenswrapper[4769]: I0122 14:03:59.719131 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.719114189 podStartE2EDuration="2.719114189s" podCreationTimestamp="2026-01-22 14:03:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:03:59.610638733 +0000 UTC m=+1219.021748662" watchObservedRunningTime="2026-01-22 14:03:59.719114189 +0000 UTC m=+1219.130224118" Jan 22 14:03:59 crc kubenswrapper[4769]: I0122 14:03:59.862911 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a87cdd0-cc09-4004-90bf-bbe9bd9b453d-logs\") pod \"0a87cdd0-cc09-4004-90bf-bbe9bd9b453d\" (UID: \"0a87cdd0-cc09-4004-90bf-bbe9bd9b453d\") " Jan 22 14:03:59 crc kubenswrapper[4769]: I0122 14:03:59.863314 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a87cdd0-cc09-4004-90bf-bbe9bd9b453d-combined-ca-bundle\") pod \"0a87cdd0-cc09-4004-90bf-bbe9bd9b453d\" (UID: \"0a87cdd0-cc09-4004-90bf-bbe9bd9b453d\") " Jan 22 14:03:59 crc kubenswrapper[4769]: I0122 14:03:59.863380 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4xgl\" (UniqueName: \"kubernetes.io/projected/0a87cdd0-cc09-4004-90bf-bbe9bd9b453d-kube-api-access-b4xgl\") pod \"0a87cdd0-cc09-4004-90bf-bbe9bd9b453d\" (UID: \"0a87cdd0-cc09-4004-90bf-bbe9bd9b453d\") " Jan 22 14:03:59 crc kubenswrapper[4769]: I0122 14:03:59.863470 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a87cdd0-cc09-4004-90bf-bbe9bd9b453d-config-data\") pod \"0a87cdd0-cc09-4004-90bf-bbe9bd9b453d\" (UID: \"0a87cdd0-cc09-4004-90bf-bbe9bd9b453d\") " Jan 22 14:03:59 crc kubenswrapper[4769]: I0122 14:03:59.863810 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a87cdd0-cc09-4004-90bf-bbe9bd9b453d-logs" (OuterVolumeSpecName: "logs") pod "0a87cdd0-cc09-4004-90bf-bbe9bd9b453d" (UID: "0a87cdd0-cc09-4004-90bf-bbe9bd9b453d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 14:03:59 crc kubenswrapper[4769]: I0122 14:03:59.864072 4769 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a87cdd0-cc09-4004-90bf-bbe9bd9b453d-logs\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:59 crc kubenswrapper[4769]: I0122 14:03:59.868695 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a87cdd0-cc09-4004-90bf-bbe9bd9b453d-kube-api-access-b4xgl" (OuterVolumeSpecName: "kube-api-access-b4xgl") pod "0a87cdd0-cc09-4004-90bf-bbe9bd9b453d" (UID: "0a87cdd0-cc09-4004-90bf-bbe9bd9b453d"). InnerVolumeSpecName "kube-api-access-b4xgl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:03:59 crc kubenswrapper[4769]: I0122 14:03:59.889906 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a87cdd0-cc09-4004-90bf-bbe9bd9b453d-config-data" (OuterVolumeSpecName: "config-data") pod "0a87cdd0-cc09-4004-90bf-bbe9bd9b453d" (UID: "0a87cdd0-cc09-4004-90bf-bbe9bd9b453d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:03:59 crc kubenswrapper[4769]: I0122 14:03:59.903985 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a87cdd0-cc09-4004-90bf-bbe9bd9b453d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0a87cdd0-cc09-4004-90bf-bbe9bd9b453d" (UID: "0a87cdd0-cc09-4004-90bf-bbe9bd9b453d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:03:59 crc kubenswrapper[4769]: I0122 14:03:59.967385 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4xgl\" (UniqueName: \"kubernetes.io/projected/0a87cdd0-cc09-4004-90bf-bbe9bd9b453d-kube-api-access-b4xgl\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:59 crc kubenswrapper[4769]: I0122 14:03:59.967445 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a87cdd0-cc09-4004-90bf-bbe9bd9b453d-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:59 crc kubenswrapper[4769]: I0122 14:03:59.967459 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a87cdd0-cc09-4004-90bf-bbe9bd9b453d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:00 crc kubenswrapper[4769]: I0122 14:04:00.561109 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 22 14:04:00 crc kubenswrapper[4769]: I0122 14:04:00.561313 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0a87cdd0-cc09-4004-90bf-bbe9bd9b453d","Type":"ContainerDied","Data":"7522f136416e24ddb1e2da868b4df82fccac17698bad3fc0cffb8764c95aa35e"} Jan 22 14:04:00 crc kubenswrapper[4769]: I0122 14:04:00.562036 4769 scope.go:117] "RemoveContainer" containerID="05fba83fc66875a0c66f3a2ceadc7ebd73ed593ba2c2b3f6ecd6111b7621b631" Jan 22 14:04:00 crc kubenswrapper[4769]: I0122 14:04:00.597369 4769 scope.go:117] "RemoveContainer" containerID="6234a0446d758d662f481f2255e5c0d82c8486e9bd8315786bd7329443cc3434" Jan 22 14:04:00 crc kubenswrapper[4769]: I0122 14:04:00.629221 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 22 14:04:00 crc kubenswrapper[4769]: I0122 14:04:00.663572 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 22 14:04:00 crc kubenswrapper[4769]: I0122 14:04:00.676914 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 22 14:04:00 crc kubenswrapper[4769]: E0122 14:04:00.677453 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a87cdd0-cc09-4004-90bf-bbe9bd9b453d" containerName="nova-api-api" Jan 22 14:04:00 crc kubenswrapper[4769]: I0122 14:04:00.677477 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a87cdd0-cc09-4004-90bf-bbe9bd9b453d" containerName="nova-api-api" Jan 22 14:04:00 crc kubenswrapper[4769]: E0122 14:04:00.677500 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a87cdd0-cc09-4004-90bf-bbe9bd9b453d" containerName="nova-api-log" Jan 22 14:04:00 crc kubenswrapper[4769]: I0122 14:04:00.677507 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a87cdd0-cc09-4004-90bf-bbe9bd9b453d" containerName="nova-api-log" Jan 22 14:04:00 crc kubenswrapper[4769]: I0122 14:04:00.677684 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a87cdd0-cc09-4004-90bf-bbe9bd9b453d" 
containerName="nova-api-log" Jan 22 14:04:00 crc kubenswrapper[4769]: I0122 14:04:00.677704 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a87cdd0-cc09-4004-90bf-bbe9bd9b453d" containerName="nova-api-api" Jan 22 14:04:00 crc kubenswrapper[4769]: I0122 14:04:00.678654 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 22 14:04:00 crc kubenswrapper[4769]: I0122 14:04:00.681204 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 22 14:04:00 crc kubenswrapper[4769]: I0122 14:04:00.688185 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 22 14:04:00 crc kubenswrapper[4769]: I0122 14:04:00.787683 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c364fe67-27fa-404c-aef8-7c9daeda4c5b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c364fe67-27fa-404c-aef8-7c9daeda4c5b\") " pod="openstack/nova-api-0" Jan 22 14:04:00 crc kubenswrapper[4769]: I0122 14:04:00.789896 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c364fe67-27fa-404c-aef8-7c9daeda4c5b-logs\") pod \"nova-api-0\" (UID: \"c364fe67-27fa-404c-aef8-7c9daeda4c5b\") " pod="openstack/nova-api-0" Jan 22 14:04:00 crc kubenswrapper[4769]: I0122 14:04:00.789936 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c364fe67-27fa-404c-aef8-7c9daeda4c5b-config-data\") pod \"nova-api-0\" (UID: \"c364fe67-27fa-404c-aef8-7c9daeda4c5b\") " pod="openstack/nova-api-0" Jan 22 14:04:00 crc kubenswrapper[4769]: I0122 14:04:00.790078 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87qhw\" (UniqueName: \"kubernetes.io/projected/c364fe67-27fa-404c-aef8-7c9daeda4c5b-kube-api-access-87qhw\") pod \"nova-api-0\" (UID: \"c364fe67-27fa-404c-aef8-7c9daeda4c5b\") " pod="openstack/nova-api-0" Jan 22 14:04:00 crc kubenswrapper[4769]: I0122 14:04:00.896329 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c364fe67-27fa-404c-aef8-7c9daeda4c5b-logs\") pod \"nova-api-0\" (UID: \"c364fe67-27fa-404c-aef8-7c9daeda4c5b\") " pod="openstack/nova-api-0" Jan 22 14:04:00 crc kubenswrapper[4769]: I0122 14:04:00.896763 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c364fe67-27fa-404c-aef8-7c9daeda4c5b-config-data\") pod \"nova-api-0\" (UID: \"c364fe67-27fa-404c-aef8-7c9daeda4c5b\") " pod="openstack/nova-api-0" Jan 22 14:04:00 crc kubenswrapper[4769]: I0122 14:04:00.896864 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87qhw\" (UniqueName: \"kubernetes.io/projected/c364fe67-27fa-404c-aef8-7c9daeda4c5b-kube-api-access-87qhw\") pod \"nova-api-0\" (UID: \"c364fe67-27fa-404c-aef8-7c9daeda4c5b\") " pod="openstack/nova-api-0" Jan 22 14:04:00 crc kubenswrapper[4769]: I0122 14:04:00.896918 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c364fe67-27fa-404c-aef8-7c9daeda4c5b-combined-ca-bundle\") pod \"nova-api-0\" (UID: 
\"c364fe67-27fa-404c-aef8-7c9daeda4c5b\") " pod="openstack/nova-api-0" Jan 22 14:04:00 crc kubenswrapper[4769]: I0122 14:04:00.897989 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c364fe67-27fa-404c-aef8-7c9daeda4c5b-logs\") pod \"nova-api-0\" (UID: \"c364fe67-27fa-404c-aef8-7c9daeda4c5b\") " pod="openstack/nova-api-0" Jan 22 14:04:00 crc kubenswrapper[4769]: I0122 14:04:00.905968 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a87cdd0-cc09-4004-90bf-bbe9bd9b453d" path="/var/lib/kubelet/pods/0a87cdd0-cc09-4004-90bf-bbe9bd9b453d/volumes" Jan 22 14:04:00 crc kubenswrapper[4769]: I0122 14:04:00.909540 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c364fe67-27fa-404c-aef8-7c9daeda4c5b-config-data\") pod \"nova-api-0\" (UID: \"c364fe67-27fa-404c-aef8-7c9daeda4c5b\") " pod="openstack/nova-api-0" Jan 22 14:04:00 crc kubenswrapper[4769]: I0122 14:04:00.924109 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87qhw\" (UniqueName: \"kubernetes.io/projected/c364fe67-27fa-404c-aef8-7c9daeda4c5b-kube-api-access-87qhw\") pod \"nova-api-0\" (UID: \"c364fe67-27fa-404c-aef8-7c9daeda4c5b\") " pod="openstack/nova-api-0" Jan 22 14:04:00 crc kubenswrapper[4769]: I0122 14:04:00.931665 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c364fe67-27fa-404c-aef8-7c9daeda4c5b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c364fe67-27fa-404c-aef8-7c9daeda4c5b\") " pod="openstack/nova-api-0" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.020068 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.133914 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.203499 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2da17df6-1c4c-453a-9943-4a44e8a14664-config-data\") pod \"2da17df6-1c4c-453a-9943-4a44e8a14664\" (UID: \"2da17df6-1c4c-453a-9943-4a44e8a14664\") " Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.203583 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2da17df6-1c4c-453a-9943-4a44e8a14664-combined-ca-bundle\") pod \"2da17df6-1c4c-453a-9943-4a44e8a14664\" (UID: \"2da17df6-1c4c-453a-9943-4a44e8a14664\") " Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.203634 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2da17df6-1c4c-453a-9943-4a44e8a14664-sg-core-conf-yaml\") pod \"2da17df6-1c4c-453a-9943-4a44e8a14664\" (UID: \"2da17df6-1c4c-453a-9943-4a44e8a14664\") " Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.203725 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqvxq\" (UniqueName: \"kubernetes.io/projected/2da17df6-1c4c-453a-9943-4a44e8a14664-kube-api-access-rqvxq\") pod \"2da17df6-1c4c-453a-9943-4a44e8a14664\" (UID: \"2da17df6-1c4c-453a-9943-4a44e8a14664\") " Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.203816 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2da17df6-1c4c-453a-9943-4a44e8a14664-run-httpd\") pod \"2da17df6-1c4c-453a-9943-4a44e8a14664\" (UID: \"2da17df6-1c4c-453a-9943-4a44e8a14664\") " Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.203844 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2da17df6-1c4c-453a-9943-4a44e8a14664-log-httpd\") pod \"2da17df6-1c4c-453a-9943-4a44e8a14664\" (UID: \"2da17df6-1c4c-453a-9943-4a44e8a14664\") " Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.203912 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2da17df6-1c4c-453a-9943-4a44e8a14664-scripts\") pod \"2da17df6-1c4c-453a-9943-4a44e8a14664\" (UID: \"2da17df6-1c4c-453a-9943-4a44e8a14664\") " Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.204971 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2da17df6-1c4c-453a-9943-4a44e8a14664-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2da17df6-1c4c-453a-9943-4a44e8a14664" (UID: "2da17df6-1c4c-453a-9943-4a44e8a14664"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.205156 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2da17df6-1c4c-453a-9943-4a44e8a14664-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2da17df6-1c4c-453a-9943-4a44e8a14664" (UID: "2da17df6-1c4c-453a-9943-4a44e8a14664"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.208123 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2da17df6-1c4c-453a-9943-4a44e8a14664-kube-api-access-rqvxq" (OuterVolumeSpecName: "kube-api-access-rqvxq") pod "2da17df6-1c4c-453a-9943-4a44e8a14664" (UID: "2da17df6-1c4c-453a-9943-4a44e8a14664"). InnerVolumeSpecName "kube-api-access-rqvxq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.210721 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2da17df6-1c4c-453a-9943-4a44e8a14664-scripts" (OuterVolumeSpecName: "scripts") pod "2da17df6-1c4c-453a-9943-4a44e8a14664" (UID: "2da17df6-1c4c-453a-9943-4a44e8a14664"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.239872 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2da17df6-1c4c-453a-9943-4a44e8a14664-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2da17df6-1c4c-453a-9943-4a44e8a14664" (UID: "2da17df6-1c4c-453a-9943-4a44e8a14664"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.305776 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2da17df6-1c4c-453a-9943-4a44e8a14664-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2da17df6-1c4c-453a-9943-4a44e8a14664" (UID: "2da17df6-1c4c-453a-9943-4a44e8a14664"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.310495 4769 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2da17df6-1c4c-453a-9943-4a44e8a14664-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.310528 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2da17df6-1c4c-453a-9943-4a44e8a14664-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.310542 4769 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2da17df6-1c4c-453a-9943-4a44e8a14664-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.310553 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqvxq\" (UniqueName: \"kubernetes.io/projected/2da17df6-1c4c-453a-9943-4a44e8a14664-kube-api-access-rqvxq\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.310564 4769 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2da17df6-1c4c-453a-9943-4a44e8a14664-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.310573 4769 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2da17df6-1c4c-453a-9943-4a44e8a14664-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.312865 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/2da17df6-1c4c-453a-9943-4a44e8a14664-config-data" (OuterVolumeSpecName: "config-data") pod "2da17df6-1c4c-453a-9943-4a44e8a14664" (UID: "2da17df6-1c4c-453a-9943-4a44e8a14664"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.412443 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2da17df6-1c4c-453a-9943-4a44e8a14664-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.489208 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.572160 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c364fe67-27fa-404c-aef8-7c9daeda4c5b","Type":"ContainerStarted","Data":"35d5b0508fa43c69ed0a25708ff2e8f1c73a876bc675cab299797220908d7f38"} Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.578912 4769 generic.go:334] "Generic (PLEG): container finished" podID="2da17df6-1c4c-453a-9943-4a44e8a14664" containerID="b5629e480d5f9bca2b9aefb9619e124dd88f058584573bab31d2157d72077ec5" exitCode=0 Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.578961 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2da17df6-1c4c-453a-9943-4a44e8a14664","Type":"ContainerDied","Data":"b5629e480d5f9bca2b9aefb9619e124dd88f058584573bab31d2157d72077ec5"} Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.578994 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2da17df6-1c4c-453a-9943-4a44e8a14664","Type":"ContainerDied","Data":"63dc06d195b1c97ecfdd599025f891b09dac847761b101705571822c9d3ef1a0"} Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.579016 4769 scope.go:117] "RemoveContainer" containerID="0bb74bf9b515919f39e14655679413cd135c984d3d72697791b38e7390ffc533" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.579157 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.613811 4769 scope.go:117] "RemoveContainer" containerID="18e6c2922fc56fe03b8bd1a70aa73fd29a75c4ee02f29e129940eb6d615fd947" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.620937 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.633961 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.643678 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 22 14:04:01 crc kubenswrapper[4769]: E0122 14:04:01.644289 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2da17df6-1c4c-453a-9943-4a44e8a14664" containerName="ceilometer-central-agent" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.644316 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="2da17df6-1c4c-453a-9943-4a44e8a14664" containerName="ceilometer-central-agent" Jan 22 14:04:01 crc kubenswrapper[4769]: E0122 14:04:01.644331 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2da17df6-1c4c-453a-9943-4a44e8a14664" containerName="ceilometer-notification-agent" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.644338 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="2da17df6-1c4c-453a-9943-4a44e8a14664" containerName="ceilometer-notification-agent" Jan 22 14:04:01 crc kubenswrapper[4769]: E0122 14:04:01.644368 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2da17df6-1c4c-453a-9943-4a44e8a14664" containerName="sg-core" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.644377 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="2da17df6-1c4c-453a-9943-4a44e8a14664" containerName="sg-core" Jan 22 14:04:01 crc kubenswrapper[4769]: E0122 14:04:01.644388 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2da17df6-1c4c-453a-9943-4a44e8a14664" containerName="proxy-httpd" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.644395 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="2da17df6-1c4c-453a-9943-4a44e8a14664" containerName="proxy-httpd" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.644556 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="2da17df6-1c4c-453a-9943-4a44e8a14664" containerName="ceilometer-notification-agent" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.644572 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="2da17df6-1c4c-453a-9943-4a44e8a14664" containerName="ceilometer-central-agent" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.644589 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="2da17df6-1c4c-453a-9943-4a44e8a14664" containerName="proxy-httpd" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.644603 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="2da17df6-1c4c-453a-9943-4a44e8a14664" containerName="sg-core" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.646290 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.650357 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.650462 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.650563 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.686802 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.720590 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f902ed28-5882-448c-b405-0e73826dc0c4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f902ed28-5882-448c-b405-0e73826dc0c4\") " pod="openstack/ceilometer-0" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.720696 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f902ed28-5882-448c-b405-0e73826dc0c4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f902ed28-5882-448c-b405-0e73826dc0c4\") " pod="openstack/ceilometer-0" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.720738 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f902ed28-5882-448c-b405-0e73826dc0c4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f902ed28-5882-448c-b405-0e73826dc0c4\") " pod="openstack/ceilometer-0" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.720956 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f902ed28-5882-448c-b405-0e73826dc0c4-log-httpd\") pod \"ceilometer-0\" (UID: \"f902ed28-5882-448c-b405-0e73826dc0c4\") " pod="openstack/ceilometer-0" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.720999 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f902ed28-5882-448c-b405-0e73826dc0c4-config-data\") pod \"ceilometer-0\" (UID: \"f902ed28-5882-448c-b405-0e73826dc0c4\") " pod="openstack/ceilometer-0" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.721022 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f902ed28-5882-448c-b405-0e73826dc0c4-scripts\") pod \"ceilometer-0\" (UID: \"f902ed28-5882-448c-b405-0e73826dc0c4\") " pod="openstack/ceilometer-0" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.721072 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f902ed28-5882-448c-b405-0e73826dc0c4-run-httpd\") pod \"ceilometer-0\" (UID: \"f902ed28-5882-448c-b405-0e73826dc0c4\") " pod="openstack/ceilometer-0" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.721154 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tq2rt\" (UniqueName: 
\"kubernetes.io/projected/f902ed28-5882-448c-b405-0e73826dc0c4-kube-api-access-tq2rt\") pod \"ceilometer-0\" (UID: \"f902ed28-5882-448c-b405-0e73826dc0c4\") " pod="openstack/ceilometer-0" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.799624 4769 scope.go:117] "RemoveContainer" containerID="b5629e480d5f9bca2b9aefb9619e124dd88f058584573bab31d2157d72077ec5" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.822095 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f902ed28-5882-448c-b405-0e73826dc0c4-log-httpd\") pod \"ceilometer-0\" (UID: \"f902ed28-5882-448c-b405-0e73826dc0c4\") " pod="openstack/ceilometer-0" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.822142 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f902ed28-5882-448c-b405-0e73826dc0c4-scripts\") pod \"ceilometer-0\" (UID: \"f902ed28-5882-448c-b405-0e73826dc0c4\") " pod="openstack/ceilometer-0" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.822166 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f902ed28-5882-448c-b405-0e73826dc0c4-config-data\") pod \"ceilometer-0\" (UID: \"f902ed28-5882-448c-b405-0e73826dc0c4\") " pod="openstack/ceilometer-0" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.822200 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f902ed28-5882-448c-b405-0e73826dc0c4-run-httpd\") pod \"ceilometer-0\" (UID: \"f902ed28-5882-448c-b405-0e73826dc0c4\") " pod="openstack/ceilometer-0" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.822247 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tq2rt\" (UniqueName: \"kubernetes.io/projected/f902ed28-5882-448c-b405-0e73826dc0c4-kube-api-access-tq2rt\") pod \"ceilometer-0\" (UID: \"f902ed28-5882-448c-b405-0e73826dc0c4\") " pod="openstack/ceilometer-0" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.822303 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f902ed28-5882-448c-b405-0e73826dc0c4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f902ed28-5882-448c-b405-0e73826dc0c4\") " pod="openstack/ceilometer-0" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.822349 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f902ed28-5882-448c-b405-0e73826dc0c4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f902ed28-5882-448c-b405-0e73826dc0c4\") " pod="openstack/ceilometer-0" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.822378 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f902ed28-5882-448c-b405-0e73826dc0c4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f902ed28-5882-448c-b405-0e73826dc0c4\") " pod="openstack/ceilometer-0" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.823363 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f902ed28-5882-448c-b405-0e73826dc0c4-run-httpd\") pod \"ceilometer-0\" (UID: \"f902ed28-5882-448c-b405-0e73826dc0c4\") " pod="openstack/ceilometer-0" Jan 22 
Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.823687 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f902ed28-5882-448c-b405-0e73826dc0c4-log-httpd\") pod \"ceilometer-0\" (UID: \"f902ed28-5882-448c-b405-0e73826dc0c4\") " pod="openstack/ceilometer-0"
Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.828438 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f902ed28-5882-448c-b405-0e73826dc0c4-scripts\") pod \"ceilometer-0\" (UID: \"f902ed28-5882-448c-b405-0e73826dc0c4\") " pod="openstack/ceilometer-0"
Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.829083 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f902ed28-5882-448c-b405-0e73826dc0c4-config-data\") pod \"ceilometer-0\" (UID: \"f902ed28-5882-448c-b405-0e73826dc0c4\") " pod="openstack/ceilometer-0"
Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.830032 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f902ed28-5882-448c-b405-0e73826dc0c4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f902ed28-5882-448c-b405-0e73826dc0c4\") " pod="openstack/ceilometer-0"
Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.838977 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f902ed28-5882-448c-b405-0e73826dc0c4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f902ed28-5882-448c-b405-0e73826dc0c4\") " pod="openstack/ceilometer-0"
Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.839691 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f902ed28-5882-448c-b405-0e73826dc0c4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f902ed28-5882-448c-b405-0e73826dc0c4\") " pod="openstack/ceilometer-0"
Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.842230 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tq2rt\" (UniqueName: \"kubernetes.io/projected/f902ed28-5882-448c-b405-0e73826dc0c4-kube-api-access-tq2rt\") pod \"ceilometer-0\" (UID: \"f902ed28-5882-448c-b405-0e73826dc0c4\") " pod="openstack/ceilometer-0"
Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.857985 4769 scope.go:117] "RemoveContainer" containerID="15eac8b08c32812a039810bb011b46bf61ee7b4ab7cdc8b93d737f5a20210c46"
Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.892317 4769 scope.go:117] "RemoveContainer" containerID="0bb74bf9b515919f39e14655679413cd135c984d3d72697791b38e7390ffc533"
Jan 22 14:04:01 crc kubenswrapper[4769]: E0122 14:04:01.892742 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0bb74bf9b515919f39e14655679413cd135c984d3d72697791b38e7390ffc533\": container with ID starting with 0bb74bf9b515919f39e14655679413cd135c984d3d72697791b38e7390ffc533 not found: ID does not exist" containerID="0bb74bf9b515919f39e14655679413cd135c984d3d72697791b38e7390ffc533"
Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.892776 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0bb74bf9b515919f39e14655679413cd135c984d3d72697791b38e7390ffc533"} err="failed to get container status \"0bb74bf9b515919f39e14655679413cd135c984d3d72697791b38e7390ffc533\": rpc error: code = NotFound desc = could not find container \"0bb74bf9b515919f39e14655679413cd135c984d3d72697791b38e7390ffc533\": container with ID starting with 0bb74bf9b515919f39e14655679413cd135c984d3d72697791b38e7390ffc533 not found: ID does not exist"
Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.892830 4769 scope.go:117] "RemoveContainer" containerID="18e6c2922fc56fe03b8bd1a70aa73fd29a75c4ee02f29e129940eb6d615fd947"
Jan 22 14:04:01 crc kubenswrapper[4769]: E0122 14:04:01.893117 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18e6c2922fc56fe03b8bd1a70aa73fd29a75c4ee02f29e129940eb6d615fd947\": container with ID starting with 18e6c2922fc56fe03b8bd1a70aa73fd29a75c4ee02f29e129940eb6d615fd947 not found: ID does not exist" containerID="18e6c2922fc56fe03b8bd1a70aa73fd29a75c4ee02f29e129940eb6d615fd947"
Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.893142 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18e6c2922fc56fe03b8bd1a70aa73fd29a75c4ee02f29e129940eb6d615fd947"} err="failed to get container status \"18e6c2922fc56fe03b8bd1a70aa73fd29a75c4ee02f29e129940eb6d615fd947\": rpc error: code = NotFound desc = could not find container \"18e6c2922fc56fe03b8bd1a70aa73fd29a75c4ee02f29e129940eb6d615fd947\": container with ID starting with 18e6c2922fc56fe03b8bd1a70aa73fd29a75c4ee02f29e129940eb6d615fd947 not found: ID does not exist"
Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.893156 4769 scope.go:117] "RemoveContainer" containerID="b5629e480d5f9bca2b9aefb9619e124dd88f058584573bab31d2157d72077ec5"
Jan 22 14:04:01 crc kubenswrapper[4769]: E0122 14:04:01.893330 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5629e480d5f9bca2b9aefb9619e124dd88f058584573bab31d2157d72077ec5\": container with ID starting with b5629e480d5f9bca2b9aefb9619e124dd88f058584573bab31d2157d72077ec5 not found: ID does not exist" containerID="b5629e480d5f9bca2b9aefb9619e124dd88f058584573bab31d2157d72077ec5"
Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.893347 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5629e480d5f9bca2b9aefb9619e124dd88f058584573bab31d2157d72077ec5"} err="failed to get container status \"b5629e480d5f9bca2b9aefb9619e124dd88f058584573bab31d2157d72077ec5\": rpc error: code = NotFound desc = could not find container \"b5629e480d5f9bca2b9aefb9619e124dd88f058584573bab31d2157d72077ec5\": container with ID starting with b5629e480d5f9bca2b9aefb9619e124dd88f058584573bab31d2157d72077ec5 not found: ID does not exist"
Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.893361 4769 scope.go:117] "RemoveContainer" containerID="15eac8b08c32812a039810bb011b46bf61ee7b4ab7cdc8b93d737f5a20210c46"
Jan 22 14:04:01 crc kubenswrapper[4769]: E0122 14:04:01.893550 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15eac8b08c32812a039810bb011b46bf61ee7b4ab7cdc8b93d737f5a20210c46\": container with ID starting with 15eac8b08c32812a039810bb011b46bf61ee7b4ab7cdc8b93d737f5a20210c46 not found: ID does not exist" containerID="15eac8b08c32812a039810bb011b46bf61ee7b4ab7cdc8b93d737f5a20210c46"
Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.893567 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15eac8b08c32812a039810bb011b46bf61ee7b4ab7cdc8b93d737f5a20210c46"} err="failed to get container status \"15eac8b08c32812a039810bb011b46bf61ee7b4ab7cdc8b93d737f5a20210c46\": rpc error: code = NotFound desc = could not find container \"15eac8b08c32812a039810bb011b46bf61ee7b4ab7cdc8b93d737f5a20210c46\": container with ID starting with 15eac8b08c32812a039810bb011b46bf61ee7b4ab7cdc8b93d737f5a20210c46 not found: ID does not exist"
Jan 22 14:04:02 crc kubenswrapper[4769]: I0122 14:04:02.121924 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 22 14:04:02 crc kubenswrapper[4769]: I0122 14:04:02.553096 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 22 14:04:02 crc kubenswrapper[4769]: W0122 14:04:02.557217 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf902ed28_5882_448c_b405_0e73826dc0c4.slice/crio-f3fd9999ec3d1b650894d27d0f996af4e6075d28c706e9c80c518e6174d1b970 WatchSource:0}: Error finding container f3fd9999ec3d1b650894d27d0f996af4e6075d28c706e9c80c518e6174d1b970: Status 404 returned error can't find the container with id f3fd9999ec3d1b650894d27d0f996af4e6075d28c706e9c80c518e6174d1b970
Jan 22 14:04:02 crc kubenswrapper[4769]: I0122 14:04:02.589117 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c364fe67-27fa-404c-aef8-7c9daeda4c5b","Type":"ContainerStarted","Data":"fd28cc9550ecd676226cd8246a263caa8c331889e275074ef01524152cabf497"}
Jan 22 14:04:02 crc kubenswrapper[4769]: I0122 14:04:02.589170 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c364fe67-27fa-404c-aef8-7c9daeda4c5b","Type":"ContainerStarted","Data":"037f7eeaac4d3e4d9fba0b70e1ebf52b58b8701e1639d6de544044d9a9f39e7f"}
Jan 22 14:04:02 crc kubenswrapper[4769]: I0122 14:04:02.591854 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f902ed28-5882-448c-b405-0e73826dc0c4","Type":"ContainerStarted","Data":"f3fd9999ec3d1b650894d27d0f996af4e6075d28c706e9c80c518e6174d1b970"}
Jan 22 14:04:02 crc kubenswrapper[4769]: I0122 14:04:02.616332 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.61631101 podStartE2EDuration="2.61631101s" podCreationTimestamp="2026-01-22 14:04:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:04:02.612251989 +0000 UTC m=+1222.023361928" watchObservedRunningTime="2026-01-22 14:04:02.61631101 +0000 UTC m=+1222.027420939"
Jan 22 14:04:02 crc kubenswrapper[4769]: I0122 14:04:02.900273 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2da17df6-1c4c-453a-9943-4a44e8a14664" path="/var/lib/kubelet/pods/2da17df6-1c4c-453a-9943-4a44e8a14664/volumes"
Jan 22 14:04:02 crc kubenswrapper[4769]: I0122 14:04:02.920846 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Jan 22 14:04:02 crc kubenswrapper[4769]: I0122 14:04:02.974738 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 22 14:04:02 crc kubenswrapper[4769]: I0122 14:04:02.974843 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 22 14:04:03
crc kubenswrapper[4769]: I0122 14:04:03.603817 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f902ed28-5882-448c-b405-0e73826dc0c4","Type":"ContainerStarted","Data":"33954ebaa2ae08febdaf6d8e5ed6dbd06836ec70ff2e59b7176a4bf1239212ff"} Jan 22 14:04:04 crc kubenswrapper[4769]: I0122 14:04:04.614124 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f902ed28-5882-448c-b405-0e73826dc0c4","Type":"ContainerStarted","Data":"95f43db78fe49c037a1c4098c6db959b4ddbe876db94b04f3ced72ff0dcb8fc1"} Jan 22 14:04:04 crc kubenswrapper[4769]: I0122 14:04:04.905646 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 22 14:04:05 crc kubenswrapper[4769]: I0122 14:04:05.624892 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f902ed28-5882-448c-b405-0e73826dc0c4","Type":"ContainerStarted","Data":"a43b713b07a3508e1dc013eed4e611717b51268adedfff171c8a279077a46f67"} Jan 22 14:04:06 crc kubenswrapper[4769]: I0122 14:04:06.642490 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f902ed28-5882-448c-b405-0e73826dc0c4","Type":"ContainerStarted","Data":"5654e44d205bf51e2ac41880b1659a570be4aa639cd373d4340517b54e17813d"} Jan 22 14:04:06 crc kubenswrapper[4769]: I0122 14:04:06.643034 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 22 14:04:06 crc kubenswrapper[4769]: I0122 14:04:06.665675 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.489903928 podStartE2EDuration="5.665652051s" podCreationTimestamp="2026-01-22 14:04:01 +0000 UTC" firstStartedPulling="2026-01-22 14:04:02.559317912 +0000 UTC m=+1221.970427841" lastFinishedPulling="2026-01-22 14:04:05.735066035 +0000 UTC m=+1225.146175964" observedRunningTime="2026-01-22 14:04:06.663187344 +0000 UTC m=+1226.074297273" watchObservedRunningTime="2026-01-22 14:04:06.665652051 +0000 UTC m=+1226.076761980" Jan 22 14:04:07 crc kubenswrapper[4769]: I0122 14:04:07.920753 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 22 14:04:07 crc kubenswrapper[4769]: I0122 14:04:07.950703 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 22 14:04:07 crc kubenswrapper[4769]: I0122 14:04:07.975403 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 22 14:04:07 crc kubenswrapper[4769]: I0122 14:04:07.976930 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 22 14:04:08 crc kubenswrapper[4769]: I0122 14:04:08.002511 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 22 14:04:08 crc kubenswrapper[4769]: I0122 14:04:08.707040 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 22 14:04:08 crc kubenswrapper[4769]: I0122 14:04:08.984980 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="5c7cab01-0731-4a76-a6d5-b6d0905b2386" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.194:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 22 14:04:08 crc 
kubenswrapper[4769]: I0122 14:04:08.984980 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="5c7cab01-0731-4a76-a6d5-b6d0905b2386" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.194:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 22 14:04:11 crc kubenswrapper[4769]: I0122 14:04:11.022001 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 22 14:04:11 crc kubenswrapper[4769]: I0122 14:04:11.022571 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 22 14:04:12 crc kubenswrapper[4769]: I0122 14:04:12.104011 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c364fe67-27fa-404c-aef8-7c9daeda4c5b" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.196:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 14:04:12 crc kubenswrapper[4769]: I0122 14:04:12.104054 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c364fe67-27fa-404c-aef8-7c9daeda4c5b" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.196:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 14:04:17 crc kubenswrapper[4769]: I0122 14:04:17.738414 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 22 14:04:17 crc kubenswrapper[4769]: I0122 14:04:17.753513 4769 generic.go:334] "Generic (PLEG): container finished" podID="f1f2c596-25ff-4c08-9b23-b90aca949e45" containerID="8f9e70a0f1c97e8735286a0ca726202c1244aa104f63b81296d54b23717fa516" exitCode=137 Jan 22 14:04:17 crc kubenswrapper[4769]: I0122 14:04:17.753727 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f1f2c596-25ff-4c08-9b23-b90aca949e45","Type":"ContainerDied","Data":"8f9e70a0f1c97e8735286a0ca726202c1244aa104f63b81296d54b23717fa516"} Jan 22 14:04:17 crc kubenswrapper[4769]: I0122 14:04:17.753877 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f1f2c596-25ff-4c08-9b23-b90aca949e45","Type":"ContainerDied","Data":"8522cdc8b7e7fadf9198c4e41afe42ad7a56383c9af88b3279cb3345f6237754"} Jan 22 14:04:17 crc kubenswrapper[4769]: I0122 14:04:17.753929 4769 scope.go:117] "RemoveContainer" containerID="8f9e70a0f1c97e8735286a0ca726202c1244aa104f63b81296d54b23717fa516" Jan 22 14:04:17 crc kubenswrapper[4769]: I0122 14:04:17.753998 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 22 14:04:17 crc kubenswrapper[4769]: I0122 14:04:17.779198 4769 scope.go:117] "RemoveContainer" containerID="8f9e70a0f1c97e8735286a0ca726202c1244aa104f63b81296d54b23717fa516" Jan 22 14:04:17 crc kubenswrapper[4769]: E0122 14:04:17.779773 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f9e70a0f1c97e8735286a0ca726202c1244aa104f63b81296d54b23717fa516\": container with ID starting with 8f9e70a0f1c97e8735286a0ca726202c1244aa104f63b81296d54b23717fa516 not found: ID does not exist" containerID="8f9e70a0f1c97e8735286a0ca726202c1244aa104f63b81296d54b23717fa516" Jan 22 14:04:17 crc kubenswrapper[4769]: I0122 14:04:17.780222 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f9e70a0f1c97e8735286a0ca726202c1244aa104f63b81296d54b23717fa516"} err="failed to get container status \"8f9e70a0f1c97e8735286a0ca726202c1244aa104f63b81296d54b23717fa516\": rpc error: code = NotFound desc = could not find container \"8f9e70a0f1c97e8735286a0ca726202c1244aa104f63b81296d54b23717fa516\": container with ID starting with 8f9e70a0f1c97e8735286a0ca726202c1244aa104f63b81296d54b23717fa516 not found: ID does not exist" Jan 22 14:04:17 crc kubenswrapper[4769]: I0122 14:04:17.836012 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbnbt\" (UniqueName: \"kubernetes.io/projected/f1f2c596-25ff-4c08-9b23-b90aca949e45-kube-api-access-lbnbt\") pod \"f1f2c596-25ff-4c08-9b23-b90aca949e45\" (UID: \"f1f2c596-25ff-4c08-9b23-b90aca949e45\") " Jan 22 14:04:17 crc kubenswrapper[4769]: I0122 14:04:17.836138 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1f2c596-25ff-4c08-9b23-b90aca949e45-combined-ca-bundle\") pod \"f1f2c596-25ff-4c08-9b23-b90aca949e45\" (UID: \"f1f2c596-25ff-4c08-9b23-b90aca949e45\") " Jan 22 14:04:17 crc kubenswrapper[4769]: I0122 14:04:17.836195 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1f2c596-25ff-4c08-9b23-b90aca949e45-config-data\") pod \"f1f2c596-25ff-4c08-9b23-b90aca949e45\" (UID: \"f1f2c596-25ff-4c08-9b23-b90aca949e45\") " Jan 22 14:04:17 crc kubenswrapper[4769]: I0122 14:04:17.842982 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1f2c596-25ff-4c08-9b23-b90aca949e45-kube-api-access-lbnbt" (OuterVolumeSpecName: "kube-api-access-lbnbt") pod "f1f2c596-25ff-4c08-9b23-b90aca949e45" (UID: "f1f2c596-25ff-4c08-9b23-b90aca949e45"). InnerVolumeSpecName "kube-api-access-lbnbt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:04:17 crc kubenswrapper[4769]: I0122 14:04:17.865146 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1f2c596-25ff-4c08-9b23-b90aca949e45-config-data" (OuterVolumeSpecName: "config-data") pod "f1f2c596-25ff-4c08-9b23-b90aca949e45" (UID: "f1f2c596-25ff-4c08-9b23-b90aca949e45"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:04:17 crc kubenswrapper[4769]: I0122 14:04:17.873723 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1f2c596-25ff-4c08-9b23-b90aca949e45-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f1f2c596-25ff-4c08-9b23-b90aca949e45" (UID: "f1f2c596-25ff-4c08-9b23-b90aca949e45"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:04:17 crc kubenswrapper[4769]: I0122 14:04:17.938783 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbnbt\" (UniqueName: \"kubernetes.io/projected/f1f2c596-25ff-4c08-9b23-b90aca949e45-kube-api-access-lbnbt\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:17 crc kubenswrapper[4769]: I0122 14:04:17.938834 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1f2c596-25ff-4c08-9b23-b90aca949e45-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:17 crc kubenswrapper[4769]: I0122 14:04:17.938848 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1f2c596-25ff-4c08-9b23-b90aca949e45-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:17 crc kubenswrapper[4769]: I0122 14:04:17.980629 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 22 14:04:17 crc kubenswrapper[4769]: I0122 14:04:17.980703 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 22 14:04:17 crc kubenswrapper[4769]: I0122 14:04:17.987292 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 22 14:04:17 crc kubenswrapper[4769]: I0122 14:04:17.994436 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 22 14:04:18 crc kubenswrapper[4769]: I0122 14:04:18.092401 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 22 14:04:18 crc kubenswrapper[4769]: I0122 14:04:18.103191 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 22 14:04:18 crc kubenswrapper[4769]: I0122 14:04:18.158595 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 22 14:04:18 crc kubenswrapper[4769]: E0122 14:04:18.159142 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1f2c596-25ff-4c08-9b23-b90aca949e45" containerName="nova-cell1-novncproxy-novncproxy" Jan 22 14:04:18 crc kubenswrapper[4769]: I0122 14:04:18.159166 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1f2c596-25ff-4c08-9b23-b90aca949e45" containerName="nova-cell1-novncproxy-novncproxy" Jan 22 14:04:18 crc kubenswrapper[4769]: I0122 14:04:18.159449 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1f2c596-25ff-4c08-9b23-b90aca949e45" containerName="nova-cell1-novncproxy-novncproxy" Jan 22 14:04:18 crc kubenswrapper[4769]: I0122 14:04:18.160392 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 22 14:04:18 crc kubenswrapper[4769]: I0122 14:04:18.162482 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 22 14:04:18 crc kubenswrapper[4769]: I0122 14:04:18.163919 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 22 14:04:18 crc kubenswrapper[4769]: I0122 14:04:18.164784 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 22 14:04:18 crc kubenswrapper[4769]: I0122 14:04:18.184932 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 22 14:04:18 crc kubenswrapper[4769]: I0122 14:04:18.245586 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5697f97b-b5e1-4e54-aebb-540e12b7953c-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"5697f97b-b5e1-4e54-aebb-540e12b7953c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 14:04:18 crc kubenswrapper[4769]: I0122 14:04:18.245764 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5697f97b-b5e1-4e54-aebb-540e12b7953c-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"5697f97b-b5e1-4e54-aebb-540e12b7953c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 14:04:18 crc kubenswrapper[4769]: I0122 14:04:18.246093 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/5697f97b-b5e1-4e54-aebb-540e12b7953c-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"5697f97b-b5e1-4e54-aebb-540e12b7953c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 14:04:18 crc kubenswrapper[4769]: I0122 14:04:18.246153 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rc56f\" (UniqueName: \"kubernetes.io/projected/5697f97b-b5e1-4e54-aebb-540e12b7953c-kube-api-access-rc56f\") pod \"nova-cell1-novncproxy-0\" (UID: \"5697f97b-b5e1-4e54-aebb-540e12b7953c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 14:04:18 crc kubenswrapper[4769]: I0122 14:04:18.246214 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/5697f97b-b5e1-4e54-aebb-540e12b7953c-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"5697f97b-b5e1-4e54-aebb-540e12b7953c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 14:04:18 crc kubenswrapper[4769]: I0122 14:04:18.347914 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5697f97b-b5e1-4e54-aebb-540e12b7953c-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"5697f97b-b5e1-4e54-aebb-540e12b7953c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 14:04:18 crc kubenswrapper[4769]: I0122 14:04:18.348069 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/5697f97b-b5e1-4e54-aebb-540e12b7953c-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"5697f97b-b5e1-4e54-aebb-540e12b7953c\") " 
pod="openstack/nova-cell1-novncproxy-0" Jan 22 14:04:18 crc kubenswrapper[4769]: I0122 14:04:18.348110 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rc56f\" (UniqueName: \"kubernetes.io/projected/5697f97b-b5e1-4e54-aebb-540e12b7953c-kube-api-access-rc56f\") pod \"nova-cell1-novncproxy-0\" (UID: \"5697f97b-b5e1-4e54-aebb-540e12b7953c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 14:04:18 crc kubenswrapper[4769]: I0122 14:04:18.348167 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/5697f97b-b5e1-4e54-aebb-540e12b7953c-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"5697f97b-b5e1-4e54-aebb-540e12b7953c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 14:04:18 crc kubenswrapper[4769]: I0122 14:04:18.348324 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5697f97b-b5e1-4e54-aebb-540e12b7953c-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"5697f97b-b5e1-4e54-aebb-540e12b7953c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 14:04:18 crc kubenswrapper[4769]: I0122 14:04:18.352118 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/5697f97b-b5e1-4e54-aebb-540e12b7953c-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"5697f97b-b5e1-4e54-aebb-540e12b7953c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 14:04:18 crc kubenswrapper[4769]: I0122 14:04:18.354504 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5697f97b-b5e1-4e54-aebb-540e12b7953c-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"5697f97b-b5e1-4e54-aebb-540e12b7953c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 14:04:18 crc kubenswrapper[4769]: I0122 14:04:18.354545 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5697f97b-b5e1-4e54-aebb-540e12b7953c-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"5697f97b-b5e1-4e54-aebb-540e12b7953c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 14:04:18 crc kubenswrapper[4769]: I0122 14:04:18.354773 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/5697f97b-b5e1-4e54-aebb-540e12b7953c-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"5697f97b-b5e1-4e54-aebb-540e12b7953c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 14:04:18 crc kubenswrapper[4769]: I0122 14:04:18.380564 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rc56f\" (UniqueName: \"kubernetes.io/projected/5697f97b-b5e1-4e54-aebb-540e12b7953c-kube-api-access-rc56f\") pod \"nova-cell1-novncproxy-0\" (UID: \"5697f97b-b5e1-4e54-aebb-540e12b7953c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 14:04:18 crc kubenswrapper[4769]: I0122 14:04:18.478398 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 22 14:04:18 crc kubenswrapper[4769]: I0122 14:04:18.897132 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1f2c596-25ff-4c08-9b23-b90aca949e45" path="/var/lib/kubelet/pods/f1f2c596-25ff-4c08-9b23-b90aca949e45/volumes" Jan 22 14:04:18 crc kubenswrapper[4769]: I0122 14:04:18.911052 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 22 14:04:19 crc kubenswrapper[4769]: I0122 14:04:19.777142 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"5697f97b-b5e1-4e54-aebb-540e12b7953c","Type":"ContainerStarted","Data":"481d01771636f93b7db8286bb4ce6448c8a9383a97aa209cbcd19cf2d2c579f7"} Jan 22 14:04:19 crc kubenswrapper[4769]: I0122 14:04:19.777509 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"5697f97b-b5e1-4e54-aebb-540e12b7953c","Type":"ContainerStarted","Data":"18ee91c6bb3320634d3c484df9199d7d5d8c792104c4053b2eb75a866e163bfd"} Jan 22 14:04:19 crc kubenswrapper[4769]: I0122 14:04:19.800006 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=1.7999820789999998 podStartE2EDuration="1.799982079s" podCreationTimestamp="2026-01-22 14:04:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:04:19.795058005 +0000 UTC m=+1239.206167934" watchObservedRunningTime="2026-01-22 14:04:19.799982079 +0000 UTC m=+1239.211092008" Jan 22 14:04:21 crc kubenswrapper[4769]: I0122 14:04:21.024539 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 22 14:04:21 crc kubenswrapper[4769]: I0122 14:04:21.025119 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 22 14:04:21 crc kubenswrapper[4769]: I0122 14:04:21.027696 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 22 14:04:21 crc kubenswrapper[4769]: I0122 14:04:21.028098 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 22 14:04:21 crc kubenswrapper[4769]: I0122 14:04:21.796511 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 22 14:04:21 crc kubenswrapper[4769]: I0122 14:04:21.800313 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 22 14:04:21 crc kubenswrapper[4769]: I0122 14:04:21.990459 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-n9fh2"] Jan 22 14:04:21 crc kubenswrapper[4769]: I0122 14:04:21.992376 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-n9fh2" Jan 22 14:04:22 crc kubenswrapper[4769]: I0122 14:04:22.016378 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-n9fh2"] Jan 22 14:04:22 crc kubenswrapper[4769]: I0122 14:04:22.041656 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6862cbe8-3411-44fc-a4a8-429c3551f695-dns-svc\") pod \"dnsmasq-dns-59cf4bdb65-n9fh2\" (UID: \"6862cbe8-3411-44fc-a4a8-429c3551f695\") " pod="openstack/dnsmasq-dns-59cf4bdb65-n9fh2" Jan 22 14:04:22 crc kubenswrapper[4769]: I0122 14:04:22.041773 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6862cbe8-3411-44fc-a4a8-429c3551f695-ovsdbserver-sb\") pod \"dnsmasq-dns-59cf4bdb65-n9fh2\" (UID: \"6862cbe8-3411-44fc-a4a8-429c3551f695\") " pod="openstack/dnsmasq-dns-59cf4bdb65-n9fh2" Jan 22 14:04:22 crc kubenswrapper[4769]: I0122 14:04:22.041891 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6862cbe8-3411-44fc-a4a8-429c3551f695-ovsdbserver-nb\") pod \"dnsmasq-dns-59cf4bdb65-n9fh2\" (UID: \"6862cbe8-3411-44fc-a4a8-429c3551f695\") " pod="openstack/dnsmasq-dns-59cf4bdb65-n9fh2" Jan 22 14:04:22 crc kubenswrapper[4769]: I0122 14:04:22.041929 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6862cbe8-3411-44fc-a4a8-429c3551f695-config\") pod \"dnsmasq-dns-59cf4bdb65-n9fh2\" (UID: \"6862cbe8-3411-44fc-a4a8-429c3551f695\") " pod="openstack/dnsmasq-dns-59cf4bdb65-n9fh2" Jan 22 14:04:22 crc kubenswrapper[4769]: I0122 14:04:22.042004 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6862cbe8-3411-44fc-a4a8-429c3551f695-dns-swift-storage-0\") pod \"dnsmasq-dns-59cf4bdb65-n9fh2\" (UID: \"6862cbe8-3411-44fc-a4a8-429c3551f695\") " pod="openstack/dnsmasq-dns-59cf4bdb65-n9fh2" Jan 22 14:04:22 crc kubenswrapper[4769]: I0122 14:04:22.042048 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzspf\" (UniqueName: \"kubernetes.io/projected/6862cbe8-3411-44fc-a4a8-429c3551f695-kube-api-access-lzspf\") pod \"dnsmasq-dns-59cf4bdb65-n9fh2\" (UID: \"6862cbe8-3411-44fc-a4a8-429c3551f695\") " pod="openstack/dnsmasq-dns-59cf4bdb65-n9fh2" Jan 22 14:04:22 crc kubenswrapper[4769]: I0122 14:04:22.144468 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6862cbe8-3411-44fc-a4a8-429c3551f695-dns-svc\") pod \"dnsmasq-dns-59cf4bdb65-n9fh2\" (UID: \"6862cbe8-3411-44fc-a4a8-429c3551f695\") " pod="openstack/dnsmasq-dns-59cf4bdb65-n9fh2" Jan 22 14:04:22 crc kubenswrapper[4769]: I0122 14:04:22.144607 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6862cbe8-3411-44fc-a4a8-429c3551f695-ovsdbserver-sb\") pod \"dnsmasq-dns-59cf4bdb65-n9fh2\" (UID: \"6862cbe8-3411-44fc-a4a8-429c3551f695\") " pod="openstack/dnsmasq-dns-59cf4bdb65-n9fh2" Jan 22 14:04:22 crc kubenswrapper[4769]: I0122 14:04:22.144641 4769 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6862cbe8-3411-44fc-a4a8-429c3551f695-ovsdbserver-nb\") pod \"dnsmasq-dns-59cf4bdb65-n9fh2\" (UID: \"6862cbe8-3411-44fc-a4a8-429c3551f695\") " pod="openstack/dnsmasq-dns-59cf4bdb65-n9fh2" Jan 22 14:04:22 crc kubenswrapper[4769]: I0122 14:04:22.144924 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6862cbe8-3411-44fc-a4a8-429c3551f695-config\") pod \"dnsmasq-dns-59cf4bdb65-n9fh2\" (UID: \"6862cbe8-3411-44fc-a4a8-429c3551f695\") " pod="openstack/dnsmasq-dns-59cf4bdb65-n9fh2" Jan 22 14:04:22 crc kubenswrapper[4769]: I0122 14:04:22.146855 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6862cbe8-3411-44fc-a4a8-429c3551f695-dns-swift-storage-0\") pod \"dnsmasq-dns-59cf4bdb65-n9fh2\" (UID: \"6862cbe8-3411-44fc-a4a8-429c3551f695\") " pod="openstack/dnsmasq-dns-59cf4bdb65-n9fh2" Jan 22 14:04:22 crc kubenswrapper[4769]: I0122 14:04:22.146922 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzspf\" (UniqueName: \"kubernetes.io/projected/6862cbe8-3411-44fc-a4a8-429c3551f695-kube-api-access-lzspf\") pod \"dnsmasq-dns-59cf4bdb65-n9fh2\" (UID: \"6862cbe8-3411-44fc-a4a8-429c3551f695\") " pod="openstack/dnsmasq-dns-59cf4bdb65-n9fh2" Jan 22 14:04:22 crc kubenswrapper[4769]: I0122 14:04:22.150663 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6862cbe8-3411-44fc-a4a8-429c3551f695-config\") pod \"dnsmasq-dns-59cf4bdb65-n9fh2\" (UID: \"6862cbe8-3411-44fc-a4a8-429c3551f695\") " pod="openstack/dnsmasq-dns-59cf4bdb65-n9fh2" Jan 22 14:04:22 crc kubenswrapper[4769]: I0122 14:04:22.150860 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6862cbe8-3411-44fc-a4a8-429c3551f695-dns-swift-storage-0\") pod \"dnsmasq-dns-59cf4bdb65-n9fh2\" (UID: \"6862cbe8-3411-44fc-a4a8-429c3551f695\") " pod="openstack/dnsmasq-dns-59cf4bdb65-n9fh2" Jan 22 14:04:22 crc kubenswrapper[4769]: I0122 14:04:22.151007 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6862cbe8-3411-44fc-a4a8-429c3551f695-ovsdbserver-sb\") pod \"dnsmasq-dns-59cf4bdb65-n9fh2\" (UID: \"6862cbe8-3411-44fc-a4a8-429c3551f695\") " pod="openstack/dnsmasq-dns-59cf4bdb65-n9fh2" Jan 22 14:04:22 crc kubenswrapper[4769]: I0122 14:04:22.151977 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6862cbe8-3411-44fc-a4a8-429c3551f695-dns-svc\") pod \"dnsmasq-dns-59cf4bdb65-n9fh2\" (UID: \"6862cbe8-3411-44fc-a4a8-429c3551f695\") " pod="openstack/dnsmasq-dns-59cf4bdb65-n9fh2" Jan 22 14:04:22 crc kubenswrapper[4769]: I0122 14:04:22.154608 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6862cbe8-3411-44fc-a4a8-429c3551f695-ovsdbserver-nb\") pod \"dnsmasq-dns-59cf4bdb65-n9fh2\" (UID: \"6862cbe8-3411-44fc-a4a8-429c3551f695\") " pod="openstack/dnsmasq-dns-59cf4bdb65-n9fh2" Jan 22 14:04:22 crc kubenswrapper[4769]: I0122 14:04:22.191680 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzspf\" (UniqueName: 
\"kubernetes.io/projected/6862cbe8-3411-44fc-a4a8-429c3551f695-kube-api-access-lzspf\") pod \"dnsmasq-dns-59cf4bdb65-n9fh2\" (UID: \"6862cbe8-3411-44fc-a4a8-429c3551f695\") " pod="openstack/dnsmasq-dns-59cf4bdb65-n9fh2" Jan 22 14:04:22 crc kubenswrapper[4769]: I0122 14:04:22.335469 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-n9fh2" Jan 22 14:04:22 crc kubenswrapper[4769]: I0122 14:04:22.859008 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-n9fh2"] Jan 22 14:04:23 crc kubenswrapper[4769]: I0122 14:04:23.479367 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 22 14:04:23 crc kubenswrapper[4769]: I0122 14:04:23.825172 4769 generic.go:334] "Generic (PLEG): container finished" podID="6862cbe8-3411-44fc-a4a8-429c3551f695" containerID="d15cdae013c4e526c860afdacd192eefc8491c63ed7c25b7d223d7e76a121a74" exitCode=0 Jan 22 14:04:23 crc kubenswrapper[4769]: I0122 14:04:23.827387 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-n9fh2" event={"ID":"6862cbe8-3411-44fc-a4a8-429c3551f695","Type":"ContainerDied","Data":"d15cdae013c4e526c860afdacd192eefc8491c63ed7c25b7d223d7e76a121a74"} Jan 22 14:04:23 crc kubenswrapper[4769]: I0122 14:04:23.827431 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-n9fh2" event={"ID":"6862cbe8-3411-44fc-a4a8-429c3551f695","Type":"ContainerStarted","Data":"c02cf0d798ec3b1583d130341ef91b5b9df6cb6c8b83ff441852191458dde04b"} Jan 22 14:04:24 crc kubenswrapper[4769]: I0122 14:04:24.387611 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 14:04:24 crc kubenswrapper[4769]: I0122 14:04:24.388267 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f902ed28-5882-448c-b405-0e73826dc0c4" containerName="ceilometer-central-agent" containerID="cri-o://33954ebaa2ae08febdaf6d8e5ed6dbd06836ec70ff2e59b7176a4bf1239212ff" gracePeriod=30 Jan 22 14:04:24 crc kubenswrapper[4769]: I0122 14:04:24.388344 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f902ed28-5882-448c-b405-0e73826dc0c4" containerName="sg-core" containerID="cri-o://a43b713b07a3508e1dc013eed4e611717b51268adedfff171c8a279077a46f67" gracePeriod=30 Jan 22 14:04:24 crc kubenswrapper[4769]: I0122 14:04:24.388371 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f902ed28-5882-448c-b405-0e73826dc0c4" containerName="ceilometer-notification-agent" containerID="cri-o://95f43db78fe49c037a1c4098c6db959b4ddbe876db94b04f3ced72ff0dcb8fc1" gracePeriod=30 Jan 22 14:04:24 crc kubenswrapper[4769]: I0122 14:04:24.388490 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f902ed28-5882-448c-b405-0e73826dc0c4" containerName="proxy-httpd" containerID="cri-o://5654e44d205bf51e2ac41880b1659a570be4aa639cd373d4340517b54e17813d" gracePeriod=30 Jan 22 14:04:24 crc kubenswrapper[4769]: I0122 14:04:24.409429 4769 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="f902ed28-5882-448c-b405-0e73826dc0c4" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 22 14:04:24 crc kubenswrapper[4769]: I0122 14:04:24.488862 4769 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 22 14:04:24 crc kubenswrapper[4769]: I0122 14:04:24.836508 4769 generic.go:334] "Generic (PLEG): container finished" podID="f902ed28-5882-448c-b405-0e73826dc0c4" containerID="5654e44d205bf51e2ac41880b1659a570be4aa639cd373d4340517b54e17813d" exitCode=0 Jan 22 14:04:24 crc kubenswrapper[4769]: I0122 14:04:24.836535 4769 generic.go:334] "Generic (PLEG): container finished" podID="f902ed28-5882-448c-b405-0e73826dc0c4" containerID="a43b713b07a3508e1dc013eed4e611717b51268adedfff171c8a279077a46f67" exitCode=2 Jan 22 14:04:24 crc kubenswrapper[4769]: I0122 14:04:24.836544 4769 generic.go:334] "Generic (PLEG): container finished" podID="f902ed28-5882-448c-b405-0e73826dc0c4" containerID="33954ebaa2ae08febdaf6d8e5ed6dbd06836ec70ff2e59b7176a4bf1239212ff" exitCode=0 Jan 22 14:04:24 crc kubenswrapper[4769]: I0122 14:04:24.836582 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f902ed28-5882-448c-b405-0e73826dc0c4","Type":"ContainerDied","Data":"5654e44d205bf51e2ac41880b1659a570be4aa639cd373d4340517b54e17813d"} Jan 22 14:04:24 crc kubenswrapper[4769]: I0122 14:04:24.836624 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f902ed28-5882-448c-b405-0e73826dc0c4","Type":"ContainerDied","Data":"a43b713b07a3508e1dc013eed4e611717b51268adedfff171c8a279077a46f67"} Jan 22 14:04:24 crc kubenswrapper[4769]: I0122 14:04:24.836636 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f902ed28-5882-448c-b405-0e73826dc0c4","Type":"ContainerDied","Data":"33954ebaa2ae08febdaf6d8e5ed6dbd06836ec70ff2e59b7176a4bf1239212ff"} Jan 22 14:04:24 crc kubenswrapper[4769]: I0122 14:04:24.839668 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-n9fh2" event={"ID":"6862cbe8-3411-44fc-a4a8-429c3551f695","Type":"ContainerStarted","Data":"e597c1032f3a38d027e48757274e85a8dd6060da78afc828fd2ba0d1b9fe0639"} Jan 22 14:04:24 crc kubenswrapper[4769]: I0122 14:04:24.839841 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c364fe67-27fa-404c-aef8-7c9daeda4c5b" containerName="nova-api-log" containerID="cri-o://037f7eeaac4d3e4d9fba0b70e1ebf52b58b8701e1639d6de544044d9a9f39e7f" gracePeriod=30 Jan 22 14:04:24 crc kubenswrapper[4769]: I0122 14:04:24.839884 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c364fe67-27fa-404c-aef8-7c9daeda4c5b" containerName="nova-api-api" containerID="cri-o://fd28cc9550ecd676226cd8246a263caa8c331889e275074ef01524152cabf497" gracePeriod=30 Jan 22 14:04:24 crc kubenswrapper[4769]: I0122 14:04:24.866722 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-59cf4bdb65-n9fh2" podStartSLOduration=3.866701954 podStartE2EDuration="3.866701954s" podCreationTimestamp="2026-01-22 14:04:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:04:24.857831073 +0000 UTC m=+1244.268941012" watchObservedRunningTime="2026-01-22 14:04:24.866701954 +0000 UTC m=+1244.277811883" Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.720363 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.728672 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f902ed28-5882-448c-b405-0e73826dc0c4-combined-ca-bundle\") pod \"f902ed28-5882-448c-b405-0e73826dc0c4\" (UID: \"f902ed28-5882-448c-b405-0e73826dc0c4\") " Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.728730 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f902ed28-5882-448c-b405-0e73826dc0c4-run-httpd\") pod \"f902ed28-5882-448c-b405-0e73826dc0c4\" (UID: \"f902ed28-5882-448c-b405-0e73826dc0c4\") " Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.728839 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f902ed28-5882-448c-b405-0e73826dc0c4-config-data\") pod \"f902ed28-5882-448c-b405-0e73826dc0c4\" (UID: \"f902ed28-5882-448c-b405-0e73826dc0c4\") " Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.728874 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f902ed28-5882-448c-b405-0e73826dc0c4-log-httpd\") pod \"f902ed28-5882-448c-b405-0e73826dc0c4\" (UID: \"f902ed28-5882-448c-b405-0e73826dc0c4\") " Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.729378 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f902ed28-5882-448c-b405-0e73826dc0c4-sg-core-conf-yaml\") pod \"f902ed28-5882-448c-b405-0e73826dc0c4\" (UID: \"f902ed28-5882-448c-b405-0e73826dc0c4\") " Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.729197 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f902ed28-5882-448c-b405-0e73826dc0c4-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f902ed28-5882-448c-b405-0e73826dc0c4" (UID: "f902ed28-5882-448c-b405-0e73826dc0c4"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.729424 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f902ed28-5882-448c-b405-0e73826dc0c4-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f902ed28-5882-448c-b405-0e73826dc0c4" (UID: "f902ed28-5882-448c-b405-0e73826dc0c4"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.729464 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f902ed28-5882-448c-b405-0e73826dc0c4-ceilometer-tls-certs\") pod \"f902ed28-5882-448c-b405-0e73826dc0c4\" (UID: \"f902ed28-5882-448c-b405-0e73826dc0c4\") " Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.729666 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tq2rt\" (UniqueName: \"kubernetes.io/projected/f902ed28-5882-448c-b405-0e73826dc0c4-kube-api-access-tq2rt\") pod \"f902ed28-5882-448c-b405-0e73826dc0c4\" (UID: \"f902ed28-5882-448c-b405-0e73826dc0c4\") " Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.729715 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f902ed28-5882-448c-b405-0e73826dc0c4-scripts\") pod \"f902ed28-5882-448c-b405-0e73826dc0c4\" (UID: \"f902ed28-5882-448c-b405-0e73826dc0c4\") " Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.730091 4769 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f902ed28-5882-448c-b405-0e73826dc0c4-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.730118 4769 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f902ed28-5882-448c-b405-0e73826dc0c4-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.749944 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f902ed28-5882-448c-b405-0e73826dc0c4-kube-api-access-tq2rt" (OuterVolumeSpecName: "kube-api-access-tq2rt") pod "f902ed28-5882-448c-b405-0e73826dc0c4" (UID: "f902ed28-5882-448c-b405-0e73826dc0c4"). InnerVolumeSpecName "kube-api-access-tq2rt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.754102 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f902ed28-5882-448c-b405-0e73826dc0c4-scripts" (OuterVolumeSpecName: "scripts") pod "f902ed28-5882-448c-b405-0e73826dc0c4" (UID: "f902ed28-5882-448c-b405-0e73826dc0c4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.786243 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f902ed28-5882-448c-b405-0e73826dc0c4-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f902ed28-5882-448c-b405-0e73826dc0c4" (UID: "f902ed28-5882-448c-b405-0e73826dc0c4"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.805907 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f902ed28-5882-448c-b405-0e73826dc0c4-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "f902ed28-5882-448c-b405-0e73826dc0c4" (UID: "f902ed28-5882-448c-b405-0e73826dc0c4"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.831211 4769 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f902ed28-5882-448c-b405-0e73826dc0c4-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.831267 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tq2rt\" (UniqueName: \"kubernetes.io/projected/f902ed28-5882-448c-b405-0e73826dc0c4-kube-api-access-tq2rt\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.831279 4769 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f902ed28-5882-448c-b405-0e73826dc0c4-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.831291 4769 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f902ed28-5882-448c-b405-0e73826dc0c4-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.864927 4769 generic.go:334] "Generic (PLEG): container finished" podID="c364fe67-27fa-404c-aef8-7c9daeda4c5b" containerID="037f7eeaac4d3e4d9fba0b70e1ebf52b58b8701e1639d6de544044d9a9f39e7f" exitCode=143 Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.865341 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c364fe67-27fa-404c-aef8-7c9daeda4c5b","Type":"ContainerDied","Data":"037f7eeaac4d3e4d9fba0b70e1ebf52b58b8701e1639d6de544044d9a9f39e7f"} Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.868548 4769 generic.go:334] "Generic (PLEG): container finished" podID="f902ed28-5882-448c-b405-0e73826dc0c4" containerID="95f43db78fe49c037a1c4098c6db959b4ddbe876db94b04f3ced72ff0dcb8fc1" exitCode=0 Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.868730 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f902ed28-5882-448c-b405-0e73826dc0c4","Type":"ContainerDied","Data":"95f43db78fe49c037a1c4098c6db959b4ddbe876db94b04f3ced72ff0dcb8fc1"} Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.868778 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.868806 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f902ed28-5882-448c-b405-0e73826dc0c4","Type":"ContainerDied","Data":"f3fd9999ec3d1b650894d27d0f996af4e6075d28c706e9c80c518e6174d1b970"} Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.868837 4769 scope.go:117] "RemoveContainer" containerID="5654e44d205bf51e2ac41880b1659a570be4aa639cd373d4340517b54e17813d" Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.869191 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-59cf4bdb65-n9fh2" Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.892126 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f902ed28-5882-448c-b405-0e73826dc0c4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f902ed28-5882-448c-b405-0e73826dc0c4" (UID: "f902ed28-5882-448c-b405-0e73826dc0c4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.906136 4769 scope.go:117] "RemoveContainer" containerID="a43b713b07a3508e1dc013eed4e611717b51268adedfff171c8a279077a46f67" Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.907412 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f902ed28-5882-448c-b405-0e73826dc0c4-config-data" (OuterVolumeSpecName: "config-data") pod "f902ed28-5882-448c-b405-0e73826dc0c4" (UID: "f902ed28-5882-448c-b405-0e73826dc0c4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.925617 4769 scope.go:117] "RemoveContainer" containerID="95f43db78fe49c037a1c4098c6db959b4ddbe876db94b04f3ced72ff0dcb8fc1" Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.932839 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f902ed28-5882-448c-b405-0e73826dc0c4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.932872 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f902ed28-5882-448c-b405-0e73826dc0c4-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.945204 4769 scope.go:117] "RemoveContainer" containerID="33954ebaa2ae08febdaf6d8e5ed6dbd06836ec70ff2e59b7176a4bf1239212ff" Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.963155 4769 scope.go:117] "RemoveContainer" containerID="5654e44d205bf51e2ac41880b1659a570be4aa639cd373d4340517b54e17813d" Jan 22 14:04:25 crc kubenswrapper[4769]: E0122 14:04:25.963541 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5654e44d205bf51e2ac41880b1659a570be4aa639cd373d4340517b54e17813d\": container with ID starting with 5654e44d205bf51e2ac41880b1659a570be4aa639cd373d4340517b54e17813d not found: ID does not exist" containerID="5654e44d205bf51e2ac41880b1659a570be4aa639cd373d4340517b54e17813d" Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.963574 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5654e44d205bf51e2ac41880b1659a570be4aa639cd373d4340517b54e17813d"} err="failed to get container status \"5654e44d205bf51e2ac41880b1659a570be4aa639cd373d4340517b54e17813d\": rpc error: code = NotFound desc = could not find container \"5654e44d205bf51e2ac41880b1659a570be4aa639cd373d4340517b54e17813d\": container with ID starting with 5654e44d205bf51e2ac41880b1659a570be4aa639cd373d4340517b54e17813d not found: ID does not exist" Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.963597 4769 scope.go:117] "RemoveContainer" containerID="a43b713b07a3508e1dc013eed4e611717b51268adedfff171c8a279077a46f67" Jan 22 14:04:25 crc kubenswrapper[4769]: E0122 14:04:25.963901 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a43b713b07a3508e1dc013eed4e611717b51268adedfff171c8a279077a46f67\": container with ID starting with a43b713b07a3508e1dc013eed4e611717b51268adedfff171c8a279077a46f67 not found: ID does not exist" containerID="a43b713b07a3508e1dc013eed4e611717b51268adedfff171c8a279077a46f67" Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.963942 4769 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"a43b713b07a3508e1dc013eed4e611717b51268adedfff171c8a279077a46f67"} err="failed to get container status \"a43b713b07a3508e1dc013eed4e611717b51268adedfff171c8a279077a46f67\": rpc error: code = NotFound desc = could not find container \"a43b713b07a3508e1dc013eed4e611717b51268adedfff171c8a279077a46f67\": container with ID starting with a43b713b07a3508e1dc013eed4e611717b51268adedfff171c8a279077a46f67 not found: ID does not exist" Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.963973 4769 scope.go:117] "RemoveContainer" containerID="95f43db78fe49c037a1c4098c6db959b4ddbe876db94b04f3ced72ff0dcb8fc1" Jan 22 14:04:25 crc kubenswrapper[4769]: E0122 14:04:25.964359 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95f43db78fe49c037a1c4098c6db959b4ddbe876db94b04f3ced72ff0dcb8fc1\": container with ID starting with 95f43db78fe49c037a1c4098c6db959b4ddbe876db94b04f3ced72ff0dcb8fc1 not found: ID does not exist" containerID="95f43db78fe49c037a1c4098c6db959b4ddbe876db94b04f3ced72ff0dcb8fc1" Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.964384 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95f43db78fe49c037a1c4098c6db959b4ddbe876db94b04f3ced72ff0dcb8fc1"} err="failed to get container status \"95f43db78fe49c037a1c4098c6db959b4ddbe876db94b04f3ced72ff0dcb8fc1\": rpc error: code = NotFound desc = could not find container \"95f43db78fe49c037a1c4098c6db959b4ddbe876db94b04f3ced72ff0dcb8fc1\": container with ID starting with 95f43db78fe49c037a1c4098c6db959b4ddbe876db94b04f3ced72ff0dcb8fc1 not found: ID does not exist" Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.964401 4769 scope.go:117] "RemoveContainer" containerID="33954ebaa2ae08febdaf6d8e5ed6dbd06836ec70ff2e59b7176a4bf1239212ff" Jan 22 14:04:25 crc kubenswrapper[4769]: E0122 14:04:25.964757 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33954ebaa2ae08febdaf6d8e5ed6dbd06836ec70ff2e59b7176a4bf1239212ff\": container with ID starting with 33954ebaa2ae08febdaf6d8e5ed6dbd06836ec70ff2e59b7176a4bf1239212ff not found: ID does not exist" containerID="33954ebaa2ae08febdaf6d8e5ed6dbd06836ec70ff2e59b7176a4bf1239212ff" Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.964777 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33954ebaa2ae08febdaf6d8e5ed6dbd06836ec70ff2e59b7176a4bf1239212ff"} err="failed to get container status \"33954ebaa2ae08febdaf6d8e5ed6dbd06836ec70ff2e59b7176a4bf1239212ff\": rpc error: code = NotFound desc = could not find container \"33954ebaa2ae08febdaf6d8e5ed6dbd06836ec70ff2e59b7176a4bf1239212ff\": container with ID starting with 33954ebaa2ae08febdaf6d8e5ed6dbd06836ec70ff2e59b7176a4bf1239212ff not found: ID does not exist" Jan 22 14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.199491 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.208580 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 22 14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.225239 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 22 14:04:26 crc kubenswrapper[4769]: E0122 14:04:26.225812 4769 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f902ed28-5882-448c-b405-0e73826dc0c4" containerName="sg-core" Jan 22 14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.225885 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="f902ed28-5882-448c-b405-0e73826dc0c4" containerName="sg-core" Jan 22 14:04:26 crc kubenswrapper[4769]: E0122 14:04:26.225941 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f902ed28-5882-448c-b405-0e73826dc0c4" containerName="ceilometer-central-agent" Jan 22 14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.225992 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="f902ed28-5882-448c-b405-0e73826dc0c4" containerName="ceilometer-central-agent" Jan 22 14:04:26 crc kubenswrapper[4769]: E0122 14:04:26.226059 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f902ed28-5882-448c-b405-0e73826dc0c4" containerName="proxy-httpd" Jan 22 14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.226110 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="f902ed28-5882-448c-b405-0e73826dc0c4" containerName="proxy-httpd" Jan 22 14:04:26 crc kubenswrapper[4769]: E0122 14:04:26.226207 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f902ed28-5882-448c-b405-0e73826dc0c4" containerName="ceilometer-notification-agent" Jan 22 14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.226260 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="f902ed28-5882-448c-b405-0e73826dc0c4" containerName="ceilometer-notification-agent" Jan 22 14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.226481 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="f902ed28-5882-448c-b405-0e73826dc0c4" containerName="sg-core" Jan 22 14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.226553 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="f902ed28-5882-448c-b405-0e73826dc0c4" containerName="proxy-httpd" Jan 22 14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.226617 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="f902ed28-5882-448c-b405-0e73826dc0c4" containerName="ceilometer-notification-agent" Jan 22 14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.226676 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="f902ed28-5882-448c-b405-0e73826dc0c4" containerName="ceilometer-central-agent" Jan 22 14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.228371 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.231123 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 22 14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.231627 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 22 14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.233382 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 22 14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.236673 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdz55\" (UniqueName: \"kubernetes.io/projected/9ac75153-4f8f-47c2-82c5-3239847b908a-kube-api-access-zdz55\") pod \"ceilometer-0\" (UID: \"9ac75153-4f8f-47c2-82c5-3239847b908a\") " pod="openstack/ceilometer-0" Jan 22 14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.236741 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9ac75153-4f8f-47c2-82c5-3239847b908a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9ac75153-4f8f-47c2-82c5-3239847b908a\") " pod="openstack/ceilometer-0" Jan 22 14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.236768 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ac75153-4f8f-47c2-82c5-3239847b908a-config-data\") pod \"ceilometer-0\" (UID: \"9ac75153-4f8f-47c2-82c5-3239847b908a\") " pod="openstack/ceilometer-0" Jan 22 14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.236781 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ac75153-4f8f-47c2-82c5-3239847b908a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9ac75153-4f8f-47c2-82c5-3239847b908a\") " pod="openstack/ceilometer-0" Jan 22 14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.236875 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ac75153-4f8f-47c2-82c5-3239847b908a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9ac75153-4f8f-47c2-82c5-3239847b908a\") " pod="openstack/ceilometer-0" Jan 22 14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.236899 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9ac75153-4f8f-47c2-82c5-3239847b908a-run-httpd\") pod \"ceilometer-0\" (UID: \"9ac75153-4f8f-47c2-82c5-3239847b908a\") " pod="openstack/ceilometer-0" Jan 22 14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.236913 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9ac75153-4f8f-47c2-82c5-3239847b908a-log-httpd\") pod \"ceilometer-0\" (UID: \"9ac75153-4f8f-47c2-82c5-3239847b908a\") " pod="openstack/ceilometer-0" Jan 22 14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.237016 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ac75153-4f8f-47c2-82c5-3239847b908a-scripts\") pod \"ceilometer-0\" (UID: \"9ac75153-4f8f-47c2-82c5-3239847b908a\") " pod="openstack/ceilometer-0" Jan 
22 14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.249141 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.333491 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 14:04:26 crc kubenswrapper[4769]: E0122 14:04:26.334913 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ceilometer-tls-certs combined-ca-bundle config-data kube-api-access-zdz55 log-httpd run-httpd scripts sg-core-conf-yaml], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/ceilometer-0" podUID="9ac75153-4f8f-47c2-82c5-3239847b908a" Jan 22 14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.340047 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ac75153-4f8f-47c2-82c5-3239847b908a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9ac75153-4f8f-47c2-82c5-3239847b908a\") " pod="openstack/ceilometer-0" Jan 22 14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.340100 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ac75153-4f8f-47c2-82c5-3239847b908a-config-data\") pod \"ceilometer-0\" (UID: \"9ac75153-4f8f-47c2-82c5-3239847b908a\") " pod="openstack/ceilometer-0" Jan 22 14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.340145 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ac75153-4f8f-47c2-82c5-3239847b908a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9ac75153-4f8f-47c2-82c5-3239847b908a\") " pod="openstack/ceilometer-0" Jan 22 14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.340168 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9ac75153-4f8f-47c2-82c5-3239847b908a-run-httpd\") pod \"ceilometer-0\" (UID: \"9ac75153-4f8f-47c2-82c5-3239847b908a\") " pod="openstack/ceilometer-0" Jan 22 14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.340738 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9ac75153-4f8f-47c2-82c5-3239847b908a-log-httpd\") pod \"ceilometer-0\" (UID: \"9ac75153-4f8f-47c2-82c5-3239847b908a\") " pod="openstack/ceilometer-0" Jan 22 14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.341032 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ac75153-4f8f-47c2-82c5-3239847b908a-scripts\") pod \"ceilometer-0\" (UID: \"9ac75153-4f8f-47c2-82c5-3239847b908a\") " pod="openstack/ceilometer-0" Jan 22 14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.341094 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdz55\" (UniqueName: \"kubernetes.io/projected/9ac75153-4f8f-47c2-82c5-3239847b908a-kube-api-access-zdz55\") pod \"ceilometer-0\" (UID: \"9ac75153-4f8f-47c2-82c5-3239847b908a\") " pod="openstack/ceilometer-0" Jan 22 14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.341195 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9ac75153-4f8f-47c2-82c5-3239847b908a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9ac75153-4f8f-47c2-82c5-3239847b908a\") " pod="openstack/ceilometer-0" Jan 22 
14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.341475 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9ac75153-4f8f-47c2-82c5-3239847b908a-run-httpd\") pod \"ceilometer-0\" (UID: \"9ac75153-4f8f-47c2-82c5-3239847b908a\") " pod="openstack/ceilometer-0" Jan 22 14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.345017 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9ac75153-4f8f-47c2-82c5-3239847b908a-log-httpd\") pod \"ceilometer-0\" (UID: \"9ac75153-4f8f-47c2-82c5-3239847b908a\") " pod="openstack/ceilometer-0" Jan 22 14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.350617 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ac75153-4f8f-47c2-82c5-3239847b908a-scripts\") pod \"ceilometer-0\" (UID: \"9ac75153-4f8f-47c2-82c5-3239847b908a\") " pod="openstack/ceilometer-0" Jan 22 14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.351045 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9ac75153-4f8f-47c2-82c5-3239847b908a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9ac75153-4f8f-47c2-82c5-3239847b908a\") " pod="openstack/ceilometer-0" Jan 22 14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.352161 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ac75153-4f8f-47c2-82c5-3239847b908a-config-data\") pod \"ceilometer-0\" (UID: \"9ac75153-4f8f-47c2-82c5-3239847b908a\") " pod="openstack/ceilometer-0" Jan 22 14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.354491 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ac75153-4f8f-47c2-82c5-3239847b908a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9ac75153-4f8f-47c2-82c5-3239847b908a\") " pod="openstack/ceilometer-0" Jan 22 14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.354672 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ac75153-4f8f-47c2-82c5-3239847b908a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9ac75153-4f8f-47c2-82c5-3239847b908a\") " pod="openstack/ceilometer-0" Jan 22 14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.364263 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdz55\" (UniqueName: \"kubernetes.io/projected/9ac75153-4f8f-47c2-82c5-3239847b908a-kube-api-access-zdz55\") pod \"ceilometer-0\" (UID: \"9ac75153-4f8f-47c2-82c5-3239847b908a\") " pod="openstack/ceilometer-0" Jan 22 14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.883877 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.897034 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f902ed28-5882-448c-b405-0e73826dc0c4" path="/var/lib/kubelet/pods/f902ed28-5882-448c-b405-0e73826dc0c4/volumes" Jan 22 14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.897607 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 14:04:27 crc kubenswrapper[4769]: I0122 14:04:27.055240 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdz55\" (UniqueName: \"kubernetes.io/projected/9ac75153-4f8f-47c2-82c5-3239847b908a-kube-api-access-zdz55\") pod \"9ac75153-4f8f-47c2-82c5-3239847b908a\" (UID: \"9ac75153-4f8f-47c2-82c5-3239847b908a\") " Jan 22 14:04:27 crc kubenswrapper[4769]: I0122 14:04:27.055320 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ac75153-4f8f-47c2-82c5-3239847b908a-ceilometer-tls-certs\") pod \"9ac75153-4f8f-47c2-82c5-3239847b908a\" (UID: \"9ac75153-4f8f-47c2-82c5-3239847b908a\") " Jan 22 14:04:27 crc kubenswrapper[4769]: I0122 14:04:27.055382 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ac75153-4f8f-47c2-82c5-3239847b908a-config-data\") pod \"9ac75153-4f8f-47c2-82c5-3239847b908a\" (UID: \"9ac75153-4f8f-47c2-82c5-3239847b908a\") " Jan 22 14:04:27 crc kubenswrapper[4769]: I0122 14:04:27.055555 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9ac75153-4f8f-47c2-82c5-3239847b908a-run-httpd\") pod \"9ac75153-4f8f-47c2-82c5-3239847b908a\" (UID: \"9ac75153-4f8f-47c2-82c5-3239847b908a\") " Jan 22 14:04:27 crc kubenswrapper[4769]: I0122 14:04:27.055582 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9ac75153-4f8f-47c2-82c5-3239847b908a-sg-core-conf-yaml\") pod \"9ac75153-4f8f-47c2-82c5-3239847b908a\" (UID: \"9ac75153-4f8f-47c2-82c5-3239847b908a\") " Jan 22 14:04:27 crc kubenswrapper[4769]: I0122 14:04:27.055604 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ac75153-4f8f-47c2-82c5-3239847b908a-scripts\") pod \"9ac75153-4f8f-47c2-82c5-3239847b908a\" (UID: \"9ac75153-4f8f-47c2-82c5-3239847b908a\") " Jan 22 14:04:27 crc kubenswrapper[4769]: I0122 14:04:27.055677 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ac75153-4f8f-47c2-82c5-3239847b908a-combined-ca-bundle\") pod \"9ac75153-4f8f-47c2-82c5-3239847b908a\" (UID: \"9ac75153-4f8f-47c2-82c5-3239847b908a\") " Jan 22 14:04:27 crc kubenswrapper[4769]: I0122 14:04:27.055701 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9ac75153-4f8f-47c2-82c5-3239847b908a-log-httpd\") pod \"9ac75153-4f8f-47c2-82c5-3239847b908a\" (UID: \"9ac75153-4f8f-47c2-82c5-3239847b908a\") " Jan 22 14:04:27 crc kubenswrapper[4769]: I0122 14:04:27.057836 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ac75153-4f8f-47c2-82c5-3239847b908a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9ac75153-4f8f-47c2-82c5-3239847b908a" (UID: "9ac75153-4f8f-47c2-82c5-3239847b908a"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 14:04:27 crc kubenswrapper[4769]: I0122 14:04:27.061319 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ac75153-4f8f-47c2-82c5-3239847b908a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9ac75153-4f8f-47c2-82c5-3239847b908a" (UID: "9ac75153-4f8f-47c2-82c5-3239847b908a"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 14:04:27 crc kubenswrapper[4769]: I0122 14:04:27.070024 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ac75153-4f8f-47c2-82c5-3239847b908a-kube-api-access-zdz55" (OuterVolumeSpecName: "kube-api-access-zdz55") pod "9ac75153-4f8f-47c2-82c5-3239847b908a" (UID: "9ac75153-4f8f-47c2-82c5-3239847b908a"). InnerVolumeSpecName "kube-api-access-zdz55". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:04:27 crc kubenswrapper[4769]: I0122 14:04:27.071871 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ac75153-4f8f-47c2-82c5-3239847b908a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9ac75153-4f8f-47c2-82c5-3239847b908a" (UID: "9ac75153-4f8f-47c2-82c5-3239847b908a"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:04:27 crc kubenswrapper[4769]: I0122 14:04:27.074984 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ac75153-4f8f-47c2-82c5-3239847b908a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9ac75153-4f8f-47c2-82c5-3239847b908a" (UID: "9ac75153-4f8f-47c2-82c5-3239847b908a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:04:27 crc kubenswrapper[4769]: I0122 14:04:27.075104 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ac75153-4f8f-47c2-82c5-3239847b908a-config-data" (OuterVolumeSpecName: "config-data") pod "9ac75153-4f8f-47c2-82c5-3239847b908a" (UID: "9ac75153-4f8f-47c2-82c5-3239847b908a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:04:27 crc kubenswrapper[4769]: I0122 14:04:27.075911 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ac75153-4f8f-47c2-82c5-3239847b908a-scripts" (OuterVolumeSpecName: "scripts") pod "9ac75153-4f8f-47c2-82c5-3239847b908a" (UID: "9ac75153-4f8f-47c2-82c5-3239847b908a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:04:27 crc kubenswrapper[4769]: I0122 14:04:27.089697 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ac75153-4f8f-47c2-82c5-3239847b908a-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "9ac75153-4f8f-47c2-82c5-3239847b908a" (UID: "9ac75153-4f8f-47c2-82c5-3239847b908a"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:04:27 crc kubenswrapper[4769]: I0122 14:04:27.159340 4769 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ac75153-4f8f-47c2-82c5-3239847b908a-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:27 crc kubenswrapper[4769]: I0122 14:04:27.159379 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ac75153-4f8f-47c2-82c5-3239847b908a-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:27 crc kubenswrapper[4769]: I0122 14:04:27.159399 4769 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9ac75153-4f8f-47c2-82c5-3239847b908a-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:27 crc kubenswrapper[4769]: I0122 14:04:27.159410 4769 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9ac75153-4f8f-47c2-82c5-3239847b908a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:27 crc kubenswrapper[4769]: I0122 14:04:27.159421 4769 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ac75153-4f8f-47c2-82c5-3239847b908a-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:27 crc kubenswrapper[4769]: I0122 14:04:27.159432 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ac75153-4f8f-47c2-82c5-3239847b908a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:27 crc kubenswrapper[4769]: I0122 14:04:27.159441 4769 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9ac75153-4f8f-47c2-82c5-3239847b908a-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:27 crc kubenswrapper[4769]: I0122 14:04:27.159451 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdz55\" (UniqueName: \"kubernetes.io/projected/9ac75153-4f8f-47c2-82c5-3239847b908a-kube-api-access-zdz55\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:27 crc kubenswrapper[4769]: I0122 14:04:27.892128 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 14:04:27 crc kubenswrapper[4769]: I0122 14:04:27.957172 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 14:04:27 crc kubenswrapper[4769]: I0122 14:04:27.983162 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.003312 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.009179 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.011797 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.013366 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.013622 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.014109 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.183128 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d9fe083b-8f17-4c51-87ff-a8a7f447190d-log-httpd\") pod \"ceilometer-0\" (UID: \"d9fe083b-8f17-4c51-87ff-a8a7f447190d\") " pod="openstack/ceilometer-0" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.184170 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9fe083b-8f17-4c51-87ff-a8a7f447190d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d9fe083b-8f17-4c51-87ff-a8a7f447190d\") " pod="openstack/ceilometer-0" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.184232 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9fe083b-8f17-4c51-87ff-a8a7f447190d-config-data\") pod \"ceilometer-0\" (UID: \"d9fe083b-8f17-4c51-87ff-a8a7f447190d\") " pod="openstack/ceilometer-0" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.184259 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d9fe083b-8f17-4c51-87ff-a8a7f447190d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d9fe083b-8f17-4c51-87ff-a8a7f447190d\") " pod="openstack/ceilometer-0" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.184287 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9fe083b-8f17-4c51-87ff-a8a7f447190d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d9fe083b-8f17-4c51-87ff-a8a7f447190d\") " pod="openstack/ceilometer-0" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.184339 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d9fe083b-8f17-4c51-87ff-a8a7f447190d-run-httpd\") pod \"ceilometer-0\" (UID: \"d9fe083b-8f17-4c51-87ff-a8a7f447190d\") " pod="openstack/ceilometer-0" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.184373 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4pmx\" (UniqueName: \"kubernetes.io/projected/d9fe083b-8f17-4c51-87ff-a8a7f447190d-kube-api-access-t4pmx\") pod \"ceilometer-0\" (UID: \"d9fe083b-8f17-4c51-87ff-a8a7f447190d\") " pod="openstack/ceilometer-0" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.184407 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/d9fe083b-8f17-4c51-87ff-a8a7f447190d-scripts\") pod \"ceilometer-0\" (UID: \"d9fe083b-8f17-4c51-87ff-a8a7f447190d\") " pod="openstack/ceilometer-0" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.302951 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d9fe083b-8f17-4c51-87ff-a8a7f447190d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d9fe083b-8f17-4c51-87ff-a8a7f447190d\") " pod="openstack/ceilometer-0" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.303021 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9fe083b-8f17-4c51-87ff-a8a7f447190d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d9fe083b-8f17-4c51-87ff-a8a7f447190d\") " pod="openstack/ceilometer-0" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.303110 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d9fe083b-8f17-4c51-87ff-a8a7f447190d-run-httpd\") pod \"ceilometer-0\" (UID: \"d9fe083b-8f17-4c51-87ff-a8a7f447190d\") " pod="openstack/ceilometer-0" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.303147 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4pmx\" (UniqueName: \"kubernetes.io/projected/d9fe083b-8f17-4c51-87ff-a8a7f447190d-kube-api-access-t4pmx\") pod \"ceilometer-0\" (UID: \"d9fe083b-8f17-4c51-87ff-a8a7f447190d\") " pod="openstack/ceilometer-0" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.303193 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9fe083b-8f17-4c51-87ff-a8a7f447190d-scripts\") pod \"ceilometer-0\" (UID: \"d9fe083b-8f17-4c51-87ff-a8a7f447190d\") " pod="openstack/ceilometer-0" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.303325 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d9fe083b-8f17-4c51-87ff-a8a7f447190d-log-httpd\") pod \"ceilometer-0\" (UID: \"d9fe083b-8f17-4c51-87ff-a8a7f447190d\") " pod="openstack/ceilometer-0" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.303384 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9fe083b-8f17-4c51-87ff-a8a7f447190d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d9fe083b-8f17-4c51-87ff-a8a7f447190d\") " pod="openstack/ceilometer-0" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.303447 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9fe083b-8f17-4c51-87ff-a8a7f447190d-config-data\") pod \"ceilometer-0\" (UID: \"d9fe083b-8f17-4c51-87ff-a8a7f447190d\") " pod="openstack/ceilometer-0" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.307087 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d9fe083b-8f17-4c51-87ff-a8a7f447190d-log-httpd\") pod \"ceilometer-0\" (UID: \"d9fe083b-8f17-4c51-87ff-a8a7f447190d\") " pod="openstack/ceilometer-0" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.307156 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/d9fe083b-8f17-4c51-87ff-a8a7f447190d-run-httpd\") pod \"ceilometer-0\" (UID: \"d9fe083b-8f17-4c51-87ff-a8a7f447190d\") " pod="openstack/ceilometer-0" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.311078 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9fe083b-8f17-4c51-87ff-a8a7f447190d-config-data\") pod \"ceilometer-0\" (UID: \"d9fe083b-8f17-4c51-87ff-a8a7f447190d\") " pod="openstack/ceilometer-0" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.311366 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9fe083b-8f17-4c51-87ff-a8a7f447190d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d9fe083b-8f17-4c51-87ff-a8a7f447190d\") " pod="openstack/ceilometer-0" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.312400 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d9fe083b-8f17-4c51-87ff-a8a7f447190d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d9fe083b-8f17-4c51-87ff-a8a7f447190d\") " pod="openstack/ceilometer-0" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.313038 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9fe083b-8f17-4c51-87ff-a8a7f447190d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d9fe083b-8f17-4c51-87ff-a8a7f447190d\") " pod="openstack/ceilometer-0" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.322279 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9fe083b-8f17-4c51-87ff-a8a7f447190d-scripts\") pod \"ceilometer-0\" (UID: \"d9fe083b-8f17-4c51-87ff-a8a7f447190d\") " pod="openstack/ceilometer-0" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.322866 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4pmx\" (UniqueName: \"kubernetes.io/projected/d9fe083b-8f17-4c51-87ff-a8a7f447190d-kube-api-access-t4pmx\") pod \"ceilometer-0\" (UID: \"d9fe083b-8f17-4c51-87ff-a8a7f447190d\") " pod="openstack/ceilometer-0" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.431510 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.479309 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.501079 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.513283 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.712000 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c364fe67-27fa-404c-aef8-7c9daeda4c5b-config-data\") pod \"c364fe67-27fa-404c-aef8-7c9daeda4c5b\" (UID: \"c364fe67-27fa-404c-aef8-7c9daeda4c5b\") " Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.712336 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87qhw\" (UniqueName: \"kubernetes.io/projected/c364fe67-27fa-404c-aef8-7c9daeda4c5b-kube-api-access-87qhw\") pod \"c364fe67-27fa-404c-aef8-7c9daeda4c5b\" (UID: \"c364fe67-27fa-404c-aef8-7c9daeda4c5b\") " Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.712428 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c364fe67-27fa-404c-aef8-7c9daeda4c5b-combined-ca-bundle\") pod \"c364fe67-27fa-404c-aef8-7c9daeda4c5b\" (UID: \"c364fe67-27fa-404c-aef8-7c9daeda4c5b\") " Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.712458 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c364fe67-27fa-404c-aef8-7c9daeda4c5b-logs\") pod \"c364fe67-27fa-404c-aef8-7c9daeda4c5b\" (UID: \"c364fe67-27fa-404c-aef8-7c9daeda4c5b\") " Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.713146 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c364fe67-27fa-404c-aef8-7c9daeda4c5b-logs" (OuterVolumeSpecName: "logs") pod "c364fe67-27fa-404c-aef8-7c9daeda4c5b" (UID: "c364fe67-27fa-404c-aef8-7c9daeda4c5b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.719401 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c364fe67-27fa-404c-aef8-7c9daeda4c5b-kube-api-access-87qhw" (OuterVolumeSpecName: "kube-api-access-87qhw") pod "c364fe67-27fa-404c-aef8-7c9daeda4c5b" (UID: "c364fe67-27fa-404c-aef8-7c9daeda4c5b"). InnerVolumeSpecName "kube-api-access-87qhw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.751097 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c364fe67-27fa-404c-aef8-7c9daeda4c5b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c364fe67-27fa-404c-aef8-7c9daeda4c5b" (UID: "c364fe67-27fa-404c-aef8-7c9daeda4c5b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.753013 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c364fe67-27fa-404c-aef8-7c9daeda4c5b-config-data" (OuterVolumeSpecName: "config-data") pod "c364fe67-27fa-404c-aef8-7c9daeda4c5b" (UID: "c364fe67-27fa-404c-aef8-7c9daeda4c5b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.814521 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c364fe67-27fa-404c-aef8-7c9daeda4c5b-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.814564 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87qhw\" (UniqueName: \"kubernetes.io/projected/c364fe67-27fa-404c-aef8-7c9daeda4c5b-kube-api-access-87qhw\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.814578 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c364fe67-27fa-404c-aef8-7c9daeda4c5b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.814590 4769 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c364fe67-27fa-404c-aef8-7c9daeda4c5b-logs\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.900382 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ac75153-4f8f-47c2-82c5-3239847b908a" path="/var/lib/kubelet/pods/9ac75153-4f8f-47c2-82c5-3239847b908a/volumes" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.907730 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 14:04:28 crc kubenswrapper[4769]: W0122 14:04:28.910769 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd9fe083b_8f17_4c51_87ff_a8a7f447190d.slice/crio-c26307cb5eafb6426a885f78d8ae320c7745045c29dc2fd8de9b728b092410f5 WatchSource:0}: Error finding container c26307cb5eafb6426a885f78d8ae320c7745045c29dc2fd8de9b728b092410f5: Status 404 returned error can't find the container with id c26307cb5eafb6426a885f78d8ae320c7745045c29dc2fd8de9b728b092410f5 Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.915219 4769 generic.go:334] "Generic (PLEG): container finished" podID="c364fe67-27fa-404c-aef8-7c9daeda4c5b" containerID="fd28cc9550ecd676226cd8246a263caa8c331889e275074ef01524152cabf497" exitCode=0 Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.916416 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.916924 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c364fe67-27fa-404c-aef8-7c9daeda4c5b","Type":"ContainerDied","Data":"fd28cc9550ecd676226cd8246a263caa8c331889e275074ef01524152cabf497"} Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.916960 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c364fe67-27fa-404c-aef8-7c9daeda4c5b","Type":"ContainerDied","Data":"35d5b0508fa43c69ed0a25708ff2e8f1c73a876bc675cab299797220908d7f38"} Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.916982 4769 scope.go:117] "RemoveContainer" containerID="fd28cc9550ecd676226cd8246a263caa8c331889e275074ef01524152cabf497" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.944077 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.953524 4769 scope.go:117] "RemoveContainer" containerID="037f7eeaac4d3e4d9fba0b70e1ebf52b58b8701e1639d6de544044d9a9f39e7f" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.957230 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.976273 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.983847 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 22 14:04:28 crc kubenswrapper[4769]: E0122 14:04:28.984282 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c364fe67-27fa-404c-aef8-7c9daeda4c5b" containerName="nova-api-log" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.984305 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="c364fe67-27fa-404c-aef8-7c9daeda4c5b" containerName="nova-api-log" Jan 22 14:04:28 crc kubenswrapper[4769]: E0122 14:04:28.984320 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c364fe67-27fa-404c-aef8-7c9daeda4c5b" containerName="nova-api-api" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.984328 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="c364fe67-27fa-404c-aef8-7c9daeda4c5b" containerName="nova-api-api" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.984541 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="c364fe67-27fa-404c-aef8-7c9daeda4c5b" containerName="nova-api-api" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.984562 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="c364fe67-27fa-404c-aef8-7c9daeda4c5b" containerName="nova-api-log" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.985500 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.987155 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.987770 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.991558 4769 scope.go:117] "RemoveContainer" containerID="fd28cc9550ecd676226cd8246a263caa8c331889e275074ef01524152cabf497" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.992081 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 22 14:04:28 crc kubenswrapper[4769]: E0122 14:04:28.992239 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd28cc9550ecd676226cd8246a263caa8c331889e275074ef01524152cabf497\": container with ID starting with fd28cc9550ecd676226cd8246a263caa8c331889e275074ef01524152cabf497 not found: ID does not exist" containerID="fd28cc9550ecd676226cd8246a263caa8c331889e275074ef01524152cabf497" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.992357 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd28cc9550ecd676226cd8246a263caa8c331889e275074ef01524152cabf497"} err="failed to get container status \"fd28cc9550ecd676226cd8246a263caa8c331889e275074ef01524152cabf497\": rpc error: code = NotFound desc = could not find container \"fd28cc9550ecd676226cd8246a263caa8c331889e275074ef01524152cabf497\": container with ID starting with fd28cc9550ecd676226cd8246a263caa8c331889e275074ef01524152cabf497 not found: ID does not exist" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.992475 4769 scope.go:117] "RemoveContainer" containerID="037f7eeaac4d3e4d9fba0b70e1ebf52b58b8701e1639d6de544044d9a9f39e7f" Jan 22 14:04:28 crc kubenswrapper[4769]: E0122 14:04:28.993518 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"037f7eeaac4d3e4d9fba0b70e1ebf52b58b8701e1639d6de544044d9a9f39e7f\": container with ID starting with 037f7eeaac4d3e4d9fba0b70e1ebf52b58b8701e1639d6de544044d9a9f39e7f not found: ID does not exist" containerID="037f7eeaac4d3e4d9fba0b70e1ebf52b58b8701e1639d6de544044d9a9f39e7f" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.993659 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"037f7eeaac4d3e4d9fba0b70e1ebf52b58b8701e1639d6de544044d9a9f39e7f"} err="failed to get container status \"037f7eeaac4d3e4d9fba0b70e1ebf52b58b8701e1639d6de544044d9a9f39e7f\": rpc error: code = NotFound desc = could not find container \"037f7eeaac4d3e4d9fba0b70e1ebf52b58b8701e1639d6de544044d9a9f39e7f\": container with ID starting with 037f7eeaac4d3e4d9fba0b70e1ebf52b58b8701e1639d6de544044d9a9f39e7f not found: ID does not exist" Jan 22 14:04:29 crc kubenswrapper[4769]: I0122 14:04:29.031854 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 22 14:04:29 crc kubenswrapper[4769]: I0122 14:04:29.121918 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjv96\" (UniqueName: \"kubernetes.io/projected/2f6bfbd9-5d31-4b63-9133-a3eebf0a8397-kube-api-access-gjv96\") pod \"nova-api-0\" (UID: \"2f6bfbd9-5d31-4b63-9133-a3eebf0a8397\") " pod="openstack/nova-api-0" 
Jan 22 14:04:29 crc kubenswrapper[4769]: I0122 14:04:29.122015 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f6bfbd9-5d31-4b63-9133-a3eebf0a8397-public-tls-certs\") pod \"nova-api-0\" (UID: \"2f6bfbd9-5d31-4b63-9133-a3eebf0a8397\") " pod="openstack/nova-api-0" Jan 22 14:04:29 crc kubenswrapper[4769]: I0122 14:04:29.122108 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f6bfbd9-5d31-4b63-9133-a3eebf0a8397-logs\") pod \"nova-api-0\" (UID: \"2f6bfbd9-5d31-4b63-9133-a3eebf0a8397\") " pod="openstack/nova-api-0" Jan 22 14:04:29 crc kubenswrapper[4769]: I0122 14:04:29.122163 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f6bfbd9-5d31-4b63-9133-a3eebf0a8397-config-data\") pod \"nova-api-0\" (UID: \"2f6bfbd9-5d31-4b63-9133-a3eebf0a8397\") " pod="openstack/nova-api-0" Jan 22 14:04:29 crc kubenswrapper[4769]: I0122 14:04:29.122190 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f6bfbd9-5d31-4b63-9133-a3eebf0a8397-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2f6bfbd9-5d31-4b63-9133-a3eebf0a8397\") " pod="openstack/nova-api-0" Jan 22 14:04:29 crc kubenswrapper[4769]: I0122 14:04:29.122517 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f6bfbd9-5d31-4b63-9133-a3eebf0a8397-internal-tls-certs\") pod \"nova-api-0\" (UID: \"2f6bfbd9-5d31-4b63-9133-a3eebf0a8397\") " pod="openstack/nova-api-0" Jan 22 14:04:29 crc kubenswrapper[4769]: I0122 14:04:29.174278 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-5j7zn"] Jan 22 14:04:29 crc kubenswrapper[4769]: I0122 14:04:29.175564 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-5j7zn" Jan 22 14:04:29 crc kubenswrapper[4769]: I0122 14:04:29.177256 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 22 14:04:29 crc kubenswrapper[4769]: I0122 14:04:29.178307 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 22 14:04:29 crc kubenswrapper[4769]: I0122 14:04:29.182575 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-5j7zn"] Jan 22 14:04:29 crc kubenswrapper[4769]: I0122 14:04:29.224403 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f6bfbd9-5d31-4b63-9133-a3eebf0a8397-public-tls-certs\") pod \"nova-api-0\" (UID: \"2f6bfbd9-5d31-4b63-9133-a3eebf0a8397\") " pod="openstack/nova-api-0" Jan 22 14:04:29 crc kubenswrapper[4769]: I0122 14:04:29.224470 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f6bfbd9-5d31-4b63-9133-a3eebf0a8397-logs\") pod \"nova-api-0\" (UID: \"2f6bfbd9-5d31-4b63-9133-a3eebf0a8397\") " pod="openstack/nova-api-0" Jan 22 14:04:29 crc kubenswrapper[4769]: I0122 14:04:29.224515 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f6bfbd9-5d31-4b63-9133-a3eebf0a8397-config-data\") pod \"nova-api-0\" (UID: \"2f6bfbd9-5d31-4b63-9133-a3eebf0a8397\") " pod="openstack/nova-api-0" Jan 22 14:04:29 crc kubenswrapper[4769]: I0122 14:04:29.224538 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f6bfbd9-5d31-4b63-9133-a3eebf0a8397-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2f6bfbd9-5d31-4b63-9133-a3eebf0a8397\") " pod="openstack/nova-api-0" Jan 22 14:04:29 crc kubenswrapper[4769]: I0122 14:04:29.224617 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f6bfbd9-5d31-4b63-9133-a3eebf0a8397-internal-tls-certs\") pod \"nova-api-0\" (UID: \"2f6bfbd9-5d31-4b63-9133-a3eebf0a8397\") " pod="openstack/nova-api-0" Jan 22 14:04:29 crc kubenswrapper[4769]: I0122 14:04:29.224656 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjv96\" (UniqueName: \"kubernetes.io/projected/2f6bfbd9-5d31-4b63-9133-a3eebf0a8397-kube-api-access-gjv96\") pod \"nova-api-0\" (UID: \"2f6bfbd9-5d31-4b63-9133-a3eebf0a8397\") " pod="openstack/nova-api-0" Jan 22 14:04:29 crc kubenswrapper[4769]: I0122 14:04:29.228681 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f6bfbd9-5d31-4b63-9133-a3eebf0a8397-public-tls-certs\") pod \"nova-api-0\" (UID: \"2f6bfbd9-5d31-4b63-9133-a3eebf0a8397\") " pod="openstack/nova-api-0" Jan 22 14:04:29 crc kubenswrapper[4769]: I0122 14:04:29.229246 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f6bfbd9-5d31-4b63-9133-a3eebf0a8397-logs\") pod \"nova-api-0\" (UID: \"2f6bfbd9-5d31-4b63-9133-a3eebf0a8397\") " pod="openstack/nova-api-0" Jan 22 14:04:29 crc kubenswrapper[4769]: I0122 14:04:29.229316 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/2f6bfbd9-5d31-4b63-9133-a3eebf0a8397-config-data\") pod \"nova-api-0\" (UID: \"2f6bfbd9-5d31-4b63-9133-a3eebf0a8397\") " pod="openstack/nova-api-0" Jan 22 14:04:29 crc kubenswrapper[4769]: I0122 14:04:29.229829 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f6bfbd9-5d31-4b63-9133-a3eebf0a8397-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2f6bfbd9-5d31-4b63-9133-a3eebf0a8397\") " pod="openstack/nova-api-0" Jan 22 14:04:29 crc kubenswrapper[4769]: I0122 14:04:29.232818 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f6bfbd9-5d31-4b63-9133-a3eebf0a8397-internal-tls-certs\") pod \"nova-api-0\" (UID: \"2f6bfbd9-5d31-4b63-9133-a3eebf0a8397\") " pod="openstack/nova-api-0" Jan 22 14:04:29 crc kubenswrapper[4769]: I0122 14:04:29.244160 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjv96\" (UniqueName: \"kubernetes.io/projected/2f6bfbd9-5d31-4b63-9133-a3eebf0a8397-kube-api-access-gjv96\") pod \"nova-api-0\" (UID: \"2f6bfbd9-5d31-4b63-9133-a3eebf0a8397\") " pod="openstack/nova-api-0" Jan 22 14:04:29 crc kubenswrapper[4769]: I0122 14:04:29.325883 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b01ed3a-6c71-4384-80a2-59814d125061-config-data\") pod \"nova-cell1-cell-mapping-5j7zn\" (UID: \"4b01ed3a-6c71-4384-80a2-59814d125061\") " pod="openstack/nova-cell1-cell-mapping-5j7zn" Jan 22 14:04:29 crc kubenswrapper[4769]: I0122 14:04:29.325946 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b01ed3a-6c71-4384-80a2-59814d125061-scripts\") pod \"nova-cell1-cell-mapping-5j7zn\" (UID: \"4b01ed3a-6c71-4384-80a2-59814d125061\") " pod="openstack/nova-cell1-cell-mapping-5j7zn" Jan 22 14:04:29 crc kubenswrapper[4769]: I0122 14:04:29.325991 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b01ed3a-6c71-4384-80a2-59814d125061-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-5j7zn\" (UID: \"4b01ed3a-6c71-4384-80a2-59814d125061\") " pod="openstack/nova-cell1-cell-mapping-5j7zn" Jan 22 14:04:29 crc kubenswrapper[4769]: I0122 14:04:29.326064 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2dqq\" (UniqueName: \"kubernetes.io/projected/4b01ed3a-6c71-4384-80a2-59814d125061-kube-api-access-c2dqq\") pod \"nova-cell1-cell-mapping-5j7zn\" (UID: \"4b01ed3a-6c71-4384-80a2-59814d125061\") " pod="openstack/nova-cell1-cell-mapping-5j7zn" Jan 22 14:04:29 crc kubenswrapper[4769]: I0122 14:04:29.330385 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 22 14:04:29 crc kubenswrapper[4769]: I0122 14:04:29.428118 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2dqq\" (UniqueName: \"kubernetes.io/projected/4b01ed3a-6c71-4384-80a2-59814d125061-kube-api-access-c2dqq\") pod \"nova-cell1-cell-mapping-5j7zn\" (UID: \"4b01ed3a-6c71-4384-80a2-59814d125061\") " pod="openstack/nova-cell1-cell-mapping-5j7zn" Jan 22 14:04:29 crc kubenswrapper[4769]: I0122 14:04:29.428350 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b01ed3a-6c71-4384-80a2-59814d125061-config-data\") pod \"nova-cell1-cell-mapping-5j7zn\" (UID: \"4b01ed3a-6c71-4384-80a2-59814d125061\") " pod="openstack/nova-cell1-cell-mapping-5j7zn" Jan 22 14:04:29 crc kubenswrapper[4769]: I0122 14:04:29.428938 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b01ed3a-6c71-4384-80a2-59814d125061-scripts\") pod \"nova-cell1-cell-mapping-5j7zn\" (UID: \"4b01ed3a-6c71-4384-80a2-59814d125061\") " pod="openstack/nova-cell1-cell-mapping-5j7zn" Jan 22 14:04:29 crc kubenswrapper[4769]: I0122 14:04:29.428984 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b01ed3a-6c71-4384-80a2-59814d125061-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-5j7zn\" (UID: \"4b01ed3a-6c71-4384-80a2-59814d125061\") " pod="openstack/nova-cell1-cell-mapping-5j7zn" Jan 22 14:04:29 crc kubenswrapper[4769]: I0122 14:04:29.432224 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b01ed3a-6c71-4384-80a2-59814d125061-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-5j7zn\" (UID: \"4b01ed3a-6c71-4384-80a2-59814d125061\") " pod="openstack/nova-cell1-cell-mapping-5j7zn" Jan 22 14:04:29 crc kubenswrapper[4769]: I0122 14:04:29.432747 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b01ed3a-6c71-4384-80a2-59814d125061-config-data\") pod \"nova-cell1-cell-mapping-5j7zn\" (UID: \"4b01ed3a-6c71-4384-80a2-59814d125061\") " pod="openstack/nova-cell1-cell-mapping-5j7zn" Jan 22 14:04:29 crc kubenswrapper[4769]: I0122 14:04:29.434226 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b01ed3a-6c71-4384-80a2-59814d125061-scripts\") pod \"nova-cell1-cell-mapping-5j7zn\" (UID: \"4b01ed3a-6c71-4384-80a2-59814d125061\") " pod="openstack/nova-cell1-cell-mapping-5j7zn" Jan 22 14:04:29 crc kubenswrapper[4769]: I0122 14:04:29.463765 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2dqq\" (UniqueName: \"kubernetes.io/projected/4b01ed3a-6c71-4384-80a2-59814d125061-kube-api-access-c2dqq\") pod \"nova-cell1-cell-mapping-5j7zn\" (UID: \"4b01ed3a-6c71-4384-80a2-59814d125061\") " pod="openstack/nova-cell1-cell-mapping-5j7zn" Jan 22 14:04:29 crc kubenswrapper[4769]: I0122 14:04:29.496881 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-5j7zn" Jan 22 14:04:29 crc kubenswrapper[4769]: I0122 14:04:29.824269 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 22 14:04:29 crc kubenswrapper[4769]: I0122 14:04:29.930259 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d9fe083b-8f17-4c51-87ff-a8a7f447190d","Type":"ContainerStarted","Data":"2c2b5612a9fd6512e6cf8e192ab9515d44f14b5c4425fd825610c65da8dc8927"} Jan 22 14:04:29 crc kubenswrapper[4769]: I0122 14:04:29.930569 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d9fe083b-8f17-4c51-87ff-a8a7f447190d","Type":"ContainerStarted","Data":"c26307cb5eafb6426a885f78d8ae320c7745045c29dc2fd8de9b728b092410f5"} Jan 22 14:04:29 crc kubenswrapper[4769]: I0122 14:04:29.934301 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2f6bfbd9-5d31-4b63-9133-a3eebf0a8397","Type":"ContainerStarted","Data":"b2ffc07def655a31961f7d5ac693137c0965a0d22c046824a655fd36ee880dad"} Jan 22 14:04:29 crc kubenswrapper[4769]: I0122 14:04:29.967707 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-5j7zn"] Jan 22 14:04:30 crc kubenswrapper[4769]: I0122 14:04:30.922070 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c364fe67-27fa-404c-aef8-7c9daeda4c5b" path="/var/lib/kubelet/pods/c364fe67-27fa-404c-aef8-7c9daeda4c5b/volumes" Jan 22 14:04:30 crc kubenswrapper[4769]: I0122 14:04:30.949554 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d9fe083b-8f17-4c51-87ff-a8a7f447190d","Type":"ContainerStarted","Data":"d550481eae244b0acb11940c894759b33a66e95371413ba92a66003adbc70c4b"} Jan 22 14:04:30 crc kubenswrapper[4769]: I0122 14:04:30.951888 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-5j7zn" event={"ID":"4b01ed3a-6c71-4384-80a2-59814d125061","Type":"ContainerStarted","Data":"8cbd39a1426db3df58f12d00edd2c60b7040ef05de418ca23684e54739a301fe"} Jan 22 14:04:30 crc kubenswrapper[4769]: I0122 14:04:30.951948 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-5j7zn" event={"ID":"4b01ed3a-6c71-4384-80a2-59814d125061","Type":"ContainerStarted","Data":"633a8acd221448532778ab148a9c13fa97affd050eec96d8e6cfe7a7d272922d"} Jan 22 14:04:30 crc kubenswrapper[4769]: I0122 14:04:30.955875 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2f6bfbd9-5d31-4b63-9133-a3eebf0a8397","Type":"ContainerStarted","Data":"5badefe5dfc5dc53862d6dc8450236c1363f0c62d22db6a1b5d8bf02e416b031"} Jan 22 14:04:30 crc kubenswrapper[4769]: I0122 14:04:30.955924 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2f6bfbd9-5d31-4b63-9133-a3eebf0a8397","Type":"ContainerStarted","Data":"ac772e03063571c02a3a23adbab8727363f22c749e8ca54b86ceb6aaea9b29c0"} Jan 22 14:04:30 crc kubenswrapper[4769]: I0122 14:04:30.995510 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.9954872850000003 podStartE2EDuration="2.995487285s" podCreationTimestamp="2026-01-22 14:04:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:04:30.990227063 +0000 UTC m=+1250.401336992" 
watchObservedRunningTime="2026-01-22 14:04:30.995487285 +0000 UTC m=+1250.406597214" Jan 22 14:04:31 crc kubenswrapper[4769]: I0122 14:04:31.012469 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-5j7zn" podStartSLOduration=2.012450206 podStartE2EDuration="2.012450206s" podCreationTimestamp="2026-01-22 14:04:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:04:31.009586349 +0000 UTC m=+1250.420696288" watchObservedRunningTime="2026-01-22 14:04:31.012450206 +0000 UTC m=+1250.423560135" Jan 22 14:04:31 crc kubenswrapper[4769]: I0122 14:04:31.978582 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d9fe083b-8f17-4c51-87ff-a8a7f447190d","Type":"ContainerStarted","Data":"10a5117e729b092a3469b25a028bd64aa98c9e9204cca4f30a629651279581b9"} Jan 22 14:04:32 crc kubenswrapper[4769]: I0122 14:04:32.336993 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-59cf4bdb65-n9fh2" Jan 22 14:04:32 crc kubenswrapper[4769]: I0122 14:04:32.421705 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-hb2xg"] Jan 22 14:04:32 crc kubenswrapper[4769]: I0122 14:04:32.421952 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-845d6d6f59-hb2xg" podUID="52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c" containerName="dnsmasq-dns" containerID="cri-o://097268cd9b4b048c77b3bed18c15fcbd5ff809f46cfef2a702c3dc0cab1091bb" gracePeriod=10 Jan 22 14:04:32 crc kubenswrapper[4769]: I0122 14:04:32.950107 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-hb2xg" Jan 22 14:04:32 crc kubenswrapper[4769]: I0122 14:04:32.993384 4769 generic.go:334] "Generic (PLEG): container finished" podID="52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c" containerID="097268cd9b4b048c77b3bed18c15fcbd5ff809f46cfef2a702c3dc0cab1091bb" exitCode=0 Jan 22 14:04:32 crc kubenswrapper[4769]: I0122 14:04:32.993464 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-hb2xg" event={"ID":"52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c","Type":"ContainerDied","Data":"097268cd9b4b048c77b3bed18c15fcbd5ff809f46cfef2a702c3dc0cab1091bb"} Jan 22 14:04:32 crc kubenswrapper[4769]: I0122 14:04:32.993509 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-hb2xg" event={"ID":"52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c","Type":"ContainerDied","Data":"07ff2a18726b3f734621e81451a91539db3bacf8cce99d939c1f38660bd71e0c"} Jan 22 14:04:32 crc kubenswrapper[4769]: I0122 14:04:32.993528 4769 scope.go:117] "RemoveContainer" containerID="097268cd9b4b048c77b3bed18c15fcbd5ff809f46cfef2a702c3dc0cab1091bb" Jan 22 14:04:32 crc kubenswrapper[4769]: I0122 14:04:32.993694 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-hb2xg" Jan 22 14:04:33 crc kubenswrapper[4769]: I0122 14:04:32.998460 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d9fe083b-8f17-4c51-87ff-a8a7f447190d","Type":"ContainerStarted","Data":"a90ef393236dbedf0a5581ef2530d218440f83d072d4ee775121bc524641d3eb"} Jan 22 14:04:33 crc kubenswrapper[4769]: I0122 14:04:32.999487 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 22 14:04:33 crc kubenswrapper[4769]: I0122 14:04:33.017101 4769 scope.go:117] "RemoveContainer" containerID="5ae6d8389b8fd75024e021ee39c4d142ba4295adb4f7e76df5657555a85574c4" Jan 22 14:04:33 crc kubenswrapper[4769]: I0122 14:04:33.026203 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c-dns-swift-storage-0\") pod \"52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c\" (UID: \"52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c\") " Jan 22 14:04:33 crc kubenswrapper[4769]: I0122 14:04:33.026253 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c-ovsdbserver-nb\") pod \"52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c\" (UID: \"52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c\") " Jan 22 14:04:33 crc kubenswrapper[4769]: I0122 14:04:33.026294 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8msgb\" (UniqueName: \"kubernetes.io/projected/52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c-kube-api-access-8msgb\") pod \"52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c\" (UID: \"52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c\") " Jan 22 14:04:33 crc kubenswrapper[4769]: I0122 14:04:33.032907 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.62366664 podStartE2EDuration="6.032891743s" podCreationTimestamp="2026-01-22 14:04:27 +0000 UTC" firstStartedPulling="2026-01-22 14:04:28.917412454 +0000 UTC m=+1248.328522383" lastFinishedPulling="2026-01-22 14:04:32.326637557 +0000 UTC m=+1251.737747486" observedRunningTime="2026-01-22 14:04:33.027147887 +0000 UTC m=+1252.438257816" watchObservedRunningTime="2026-01-22 14:04:33.032891743 +0000 UTC m=+1252.444001672" Jan 22 14:04:33 crc kubenswrapper[4769]: I0122 14:04:33.046860 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c-kube-api-access-8msgb" (OuterVolumeSpecName: "kube-api-access-8msgb") pod "52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c" (UID: "52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c"). InnerVolumeSpecName "kube-api-access-8msgb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:04:33 crc kubenswrapper[4769]: I0122 14:04:33.056669 4769 scope.go:117] "RemoveContainer" containerID="097268cd9b4b048c77b3bed18c15fcbd5ff809f46cfef2a702c3dc0cab1091bb" Jan 22 14:04:33 crc kubenswrapper[4769]: E0122 14:04:33.057202 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"097268cd9b4b048c77b3bed18c15fcbd5ff809f46cfef2a702c3dc0cab1091bb\": container with ID starting with 097268cd9b4b048c77b3bed18c15fcbd5ff809f46cfef2a702c3dc0cab1091bb not found: ID does not exist" containerID="097268cd9b4b048c77b3bed18c15fcbd5ff809f46cfef2a702c3dc0cab1091bb" Jan 22 14:04:33 crc kubenswrapper[4769]: I0122 14:04:33.057238 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"097268cd9b4b048c77b3bed18c15fcbd5ff809f46cfef2a702c3dc0cab1091bb"} err="failed to get container status \"097268cd9b4b048c77b3bed18c15fcbd5ff809f46cfef2a702c3dc0cab1091bb\": rpc error: code = NotFound desc = could not find container \"097268cd9b4b048c77b3bed18c15fcbd5ff809f46cfef2a702c3dc0cab1091bb\": container with ID starting with 097268cd9b4b048c77b3bed18c15fcbd5ff809f46cfef2a702c3dc0cab1091bb not found: ID does not exist" Jan 22 14:04:33 crc kubenswrapper[4769]: I0122 14:04:33.057265 4769 scope.go:117] "RemoveContainer" containerID="5ae6d8389b8fd75024e021ee39c4d142ba4295adb4f7e76df5657555a85574c4" Jan 22 14:04:33 crc kubenswrapper[4769]: E0122 14:04:33.057733 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ae6d8389b8fd75024e021ee39c4d142ba4295adb4f7e76df5657555a85574c4\": container with ID starting with 5ae6d8389b8fd75024e021ee39c4d142ba4295adb4f7e76df5657555a85574c4 not found: ID does not exist" containerID="5ae6d8389b8fd75024e021ee39c4d142ba4295adb4f7e76df5657555a85574c4" Jan 22 14:04:33 crc kubenswrapper[4769]: I0122 14:04:33.057762 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ae6d8389b8fd75024e021ee39c4d142ba4295adb4f7e76df5657555a85574c4"} err="failed to get container status \"5ae6d8389b8fd75024e021ee39c4d142ba4295adb4f7e76df5657555a85574c4\": rpc error: code = NotFound desc = could not find container \"5ae6d8389b8fd75024e021ee39c4d142ba4295adb4f7e76df5657555a85574c4\": container with ID starting with 5ae6d8389b8fd75024e021ee39c4d142ba4295adb4f7e76df5657555a85574c4 not found: ID does not exist" Jan 22 14:04:33 crc kubenswrapper[4769]: I0122 14:04:33.093330 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c" (UID: "52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:04:33 crc kubenswrapper[4769]: I0122 14:04:33.093438 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c" (UID: "52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:04:33 crc kubenswrapper[4769]: I0122 14:04:33.128715 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c-ovsdbserver-sb\") pod \"52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c\" (UID: \"52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c\") " Jan 22 14:04:33 crc kubenswrapper[4769]: I0122 14:04:33.128813 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c-config\") pod \"52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c\" (UID: \"52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c\") " Jan 22 14:04:33 crc kubenswrapper[4769]: I0122 14:04:33.128918 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c-dns-svc\") pod \"52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c\" (UID: \"52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c\") " Jan 22 14:04:33 crc kubenswrapper[4769]: I0122 14:04:33.129453 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8msgb\" (UniqueName: \"kubernetes.io/projected/52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c-kube-api-access-8msgb\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:33 crc kubenswrapper[4769]: I0122 14:04:33.129472 4769 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:33 crc kubenswrapper[4769]: I0122 14:04:33.129483 4769 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:33 crc kubenswrapper[4769]: I0122 14:04:33.185157 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c-config" (OuterVolumeSpecName: "config") pod "52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c" (UID: "52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:04:33 crc kubenswrapper[4769]: I0122 14:04:33.189075 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c" (UID: "52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:04:33 crc kubenswrapper[4769]: I0122 14:04:33.194079 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c" (UID: "52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:04:33 crc kubenswrapper[4769]: I0122 14:04:33.230364 4769 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:33 crc kubenswrapper[4769]: I0122 14:04:33.230406 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:33 crc kubenswrapper[4769]: I0122 14:04:33.230417 4769 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:33 crc kubenswrapper[4769]: I0122 14:04:33.339677 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-hb2xg"] Jan 22 14:04:33 crc kubenswrapper[4769]: I0122 14:04:33.351342 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-hb2xg"] Jan 22 14:04:34 crc kubenswrapper[4769]: I0122 14:04:34.895087 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c" path="/var/lib/kubelet/pods/52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c/volumes" Jan 22 14:04:36 crc kubenswrapper[4769]: I0122 14:04:36.024030 4769 generic.go:334] "Generic (PLEG): container finished" podID="4b01ed3a-6c71-4384-80a2-59814d125061" containerID="8cbd39a1426db3df58f12d00edd2c60b7040ef05de418ca23684e54739a301fe" exitCode=0 Jan 22 14:04:36 crc kubenswrapper[4769]: I0122 14:04:36.024189 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-5j7zn" event={"ID":"4b01ed3a-6c71-4384-80a2-59814d125061","Type":"ContainerDied","Data":"8cbd39a1426db3df58f12d00edd2c60b7040ef05de418ca23684e54739a301fe"} Jan 22 14:04:37 crc kubenswrapper[4769]: I0122 14:04:37.346861 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-5j7zn" Jan 22 14:04:37 crc kubenswrapper[4769]: I0122 14:04:37.505014 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b01ed3a-6c71-4384-80a2-59814d125061-scripts\") pod \"4b01ed3a-6c71-4384-80a2-59814d125061\" (UID: \"4b01ed3a-6c71-4384-80a2-59814d125061\") " Jan 22 14:04:37 crc kubenswrapper[4769]: I0122 14:04:37.505154 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b01ed3a-6c71-4384-80a2-59814d125061-config-data\") pod \"4b01ed3a-6c71-4384-80a2-59814d125061\" (UID: \"4b01ed3a-6c71-4384-80a2-59814d125061\") " Jan 22 14:04:37 crc kubenswrapper[4769]: I0122 14:04:37.505290 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2dqq\" (UniqueName: \"kubernetes.io/projected/4b01ed3a-6c71-4384-80a2-59814d125061-kube-api-access-c2dqq\") pod \"4b01ed3a-6c71-4384-80a2-59814d125061\" (UID: \"4b01ed3a-6c71-4384-80a2-59814d125061\") " Jan 22 14:04:37 crc kubenswrapper[4769]: I0122 14:04:37.505352 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b01ed3a-6c71-4384-80a2-59814d125061-combined-ca-bundle\") pod \"4b01ed3a-6c71-4384-80a2-59814d125061\" (UID: \"4b01ed3a-6c71-4384-80a2-59814d125061\") " Jan 22 14:04:37 crc kubenswrapper[4769]: I0122 14:04:37.519338 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b01ed3a-6c71-4384-80a2-59814d125061-kube-api-access-c2dqq" (OuterVolumeSpecName: "kube-api-access-c2dqq") pod "4b01ed3a-6c71-4384-80a2-59814d125061" (UID: "4b01ed3a-6c71-4384-80a2-59814d125061"). InnerVolumeSpecName "kube-api-access-c2dqq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:04:37 crc kubenswrapper[4769]: I0122 14:04:37.525963 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b01ed3a-6c71-4384-80a2-59814d125061-scripts" (OuterVolumeSpecName: "scripts") pod "4b01ed3a-6c71-4384-80a2-59814d125061" (UID: "4b01ed3a-6c71-4384-80a2-59814d125061"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:04:37 crc kubenswrapper[4769]: I0122 14:04:37.536672 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b01ed3a-6c71-4384-80a2-59814d125061-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4b01ed3a-6c71-4384-80a2-59814d125061" (UID: "4b01ed3a-6c71-4384-80a2-59814d125061"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:04:37 crc kubenswrapper[4769]: I0122 14:04:37.554002 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b01ed3a-6c71-4384-80a2-59814d125061-config-data" (OuterVolumeSpecName: "config-data") pod "4b01ed3a-6c71-4384-80a2-59814d125061" (UID: "4b01ed3a-6c71-4384-80a2-59814d125061"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:04:37 crc kubenswrapper[4769]: I0122 14:04:37.608212 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2dqq\" (UniqueName: \"kubernetes.io/projected/4b01ed3a-6c71-4384-80a2-59814d125061-kube-api-access-c2dqq\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:37 crc kubenswrapper[4769]: I0122 14:04:37.608250 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b01ed3a-6c71-4384-80a2-59814d125061-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:37 crc kubenswrapper[4769]: I0122 14:04:37.608263 4769 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b01ed3a-6c71-4384-80a2-59814d125061-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:37 crc kubenswrapper[4769]: I0122 14:04:37.608276 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b01ed3a-6c71-4384-80a2-59814d125061-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:38 crc kubenswrapper[4769]: I0122 14:04:38.045978 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-5j7zn" event={"ID":"4b01ed3a-6c71-4384-80a2-59814d125061","Type":"ContainerDied","Data":"633a8acd221448532778ab148a9c13fa97affd050eec96d8e6cfe7a7d272922d"} Jan 22 14:04:38 crc kubenswrapper[4769]: I0122 14:04:38.046304 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="633a8acd221448532778ab148a9c13fa97affd050eec96d8e6cfe7a7d272922d" Jan 22 14:04:38 crc kubenswrapper[4769]: I0122 14:04:38.046224 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-5j7zn" Jan 22 14:04:38 crc kubenswrapper[4769]: I0122 14:04:38.205545 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 22 14:04:38 crc kubenswrapper[4769]: I0122 14:04:38.205846 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2f6bfbd9-5d31-4b63-9133-a3eebf0a8397" containerName="nova-api-log" containerID="cri-o://ac772e03063571c02a3a23adbab8727363f22c749e8ca54b86ceb6aaea9b29c0" gracePeriod=30 Jan 22 14:04:38 crc kubenswrapper[4769]: I0122 14:04:38.206336 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2f6bfbd9-5d31-4b63-9133-a3eebf0a8397" containerName="nova-api-api" containerID="cri-o://5badefe5dfc5dc53862d6dc8450236c1363f0c62d22db6a1b5d8bf02e416b031" gracePeriod=30 Jan 22 14:04:38 crc kubenswrapper[4769]: I0122 14:04:38.224528 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 14:04:38 crc kubenswrapper[4769]: I0122 14:04:38.224740 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="7875d554-e943-402f-b176-8644590e7926" containerName="nova-scheduler-scheduler" containerID="cri-o://e0754791b973b6c6e50cd28d6e666820f0fab5aa1539d3354d44e545af3bf6d2" gracePeriod=30 Jan 22 14:04:38 crc kubenswrapper[4769]: I0122 14:04:38.268537 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 14:04:38 crc kubenswrapper[4769]: I0122 14:04:38.268805 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="5c7cab01-0731-4a76-a6d5-b6d0905b2386" 
containerName="nova-metadata-log" containerID="cri-o://5f77e6a254e6237b524fe2cf9da977a96602a8070e3ffc2d54bbf6f07842e09b" gracePeriod=30 Jan 22 14:04:38 crc kubenswrapper[4769]: I0122 14:04:38.268909 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="5c7cab01-0731-4a76-a6d5-b6d0905b2386" containerName="nova-metadata-metadata" containerID="cri-o://c9ef3086d0eab5a6024f2f27d8147bdef3796ef183a5e360249a426cc534010c" gracePeriod=30 Jan 22 14:04:38 crc kubenswrapper[4769]: I0122 14:04:38.742959 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 22 14:04:38 crc kubenswrapper[4769]: I0122 14:04:38.834291 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f6bfbd9-5d31-4b63-9133-a3eebf0a8397-config-data\") pod \"2f6bfbd9-5d31-4b63-9133-a3eebf0a8397\" (UID: \"2f6bfbd9-5d31-4b63-9133-a3eebf0a8397\") " Jan 22 14:04:38 crc kubenswrapper[4769]: I0122 14:04:38.834656 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f6bfbd9-5d31-4b63-9133-a3eebf0a8397-internal-tls-certs\") pod \"2f6bfbd9-5d31-4b63-9133-a3eebf0a8397\" (UID: \"2f6bfbd9-5d31-4b63-9133-a3eebf0a8397\") " Jan 22 14:04:38 crc kubenswrapper[4769]: I0122 14:04:38.834747 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f6bfbd9-5d31-4b63-9133-a3eebf0a8397-logs\") pod \"2f6bfbd9-5d31-4b63-9133-a3eebf0a8397\" (UID: \"2f6bfbd9-5d31-4b63-9133-a3eebf0a8397\") " Jan 22 14:04:38 crc kubenswrapper[4769]: I0122 14:04:38.834915 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f6bfbd9-5d31-4b63-9133-a3eebf0a8397-public-tls-certs\") pod \"2f6bfbd9-5d31-4b63-9133-a3eebf0a8397\" (UID: \"2f6bfbd9-5d31-4b63-9133-a3eebf0a8397\") " Jan 22 14:04:38 crc kubenswrapper[4769]: I0122 14:04:38.834940 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f6bfbd9-5d31-4b63-9133-a3eebf0a8397-combined-ca-bundle\") pod \"2f6bfbd9-5d31-4b63-9133-a3eebf0a8397\" (UID: \"2f6bfbd9-5d31-4b63-9133-a3eebf0a8397\") " Jan 22 14:04:38 crc kubenswrapper[4769]: I0122 14:04:38.834994 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjv96\" (UniqueName: \"kubernetes.io/projected/2f6bfbd9-5d31-4b63-9133-a3eebf0a8397-kube-api-access-gjv96\") pod \"2f6bfbd9-5d31-4b63-9133-a3eebf0a8397\" (UID: \"2f6bfbd9-5d31-4b63-9133-a3eebf0a8397\") " Jan 22 14:04:38 crc kubenswrapper[4769]: I0122 14:04:38.836284 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f6bfbd9-5d31-4b63-9133-a3eebf0a8397-logs" (OuterVolumeSpecName: "logs") pod "2f6bfbd9-5d31-4b63-9133-a3eebf0a8397" (UID: "2f6bfbd9-5d31-4b63-9133-a3eebf0a8397"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 14:04:38 crc kubenswrapper[4769]: I0122 14:04:38.839968 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f6bfbd9-5d31-4b63-9133-a3eebf0a8397-kube-api-access-gjv96" (OuterVolumeSpecName: "kube-api-access-gjv96") pod "2f6bfbd9-5d31-4b63-9133-a3eebf0a8397" (UID: "2f6bfbd9-5d31-4b63-9133-a3eebf0a8397"). InnerVolumeSpecName "kube-api-access-gjv96". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:04:38 crc kubenswrapper[4769]: I0122 14:04:38.862642 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f6bfbd9-5d31-4b63-9133-a3eebf0a8397-config-data" (OuterVolumeSpecName: "config-data") pod "2f6bfbd9-5d31-4b63-9133-a3eebf0a8397" (UID: "2f6bfbd9-5d31-4b63-9133-a3eebf0a8397"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:04:38 crc kubenswrapper[4769]: I0122 14:04:38.875325 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f6bfbd9-5d31-4b63-9133-a3eebf0a8397-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2f6bfbd9-5d31-4b63-9133-a3eebf0a8397" (UID: "2f6bfbd9-5d31-4b63-9133-a3eebf0a8397"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:04:38 crc kubenswrapper[4769]: I0122 14:04:38.885837 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f6bfbd9-5d31-4b63-9133-a3eebf0a8397-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "2f6bfbd9-5d31-4b63-9133-a3eebf0a8397" (UID: "2f6bfbd9-5d31-4b63-9133-a3eebf0a8397"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:04:38 crc kubenswrapper[4769]: I0122 14:04:38.888522 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f6bfbd9-5d31-4b63-9133-a3eebf0a8397-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "2f6bfbd9-5d31-4b63-9133-a3eebf0a8397" (UID: "2f6bfbd9-5d31-4b63-9133-a3eebf0a8397"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:04:38 crc kubenswrapper[4769]: I0122 14:04:38.936811 4769 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f6bfbd9-5d31-4b63-9133-a3eebf0a8397-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:38 crc kubenswrapper[4769]: I0122 14:04:38.936853 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f6bfbd9-5d31-4b63-9133-a3eebf0a8397-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:38 crc kubenswrapper[4769]: I0122 14:04:38.936863 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjv96\" (UniqueName: \"kubernetes.io/projected/2f6bfbd9-5d31-4b63-9133-a3eebf0a8397-kube-api-access-gjv96\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:38 crc kubenswrapper[4769]: I0122 14:04:38.936873 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f6bfbd9-5d31-4b63-9133-a3eebf0a8397-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:38 crc kubenswrapper[4769]: I0122 14:04:38.936882 4769 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f6bfbd9-5d31-4b63-9133-a3eebf0a8397-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:38 crc kubenswrapper[4769]: I0122 14:04:38.936890 4769 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f6bfbd9-5d31-4b63-9133-a3eebf0a8397-logs\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.057278 4769 generic.go:334] "Generic (PLEG): container finished" podID="5c7cab01-0731-4a76-a6d5-b6d0905b2386" containerID="5f77e6a254e6237b524fe2cf9da977a96602a8070e3ffc2d54bbf6f07842e09b" exitCode=143 Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.057334 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5c7cab01-0731-4a76-a6d5-b6d0905b2386","Type":"ContainerDied","Data":"5f77e6a254e6237b524fe2cf9da977a96602a8070e3ffc2d54bbf6f07842e09b"} Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.059337 4769 generic.go:334] "Generic (PLEG): container finished" podID="2f6bfbd9-5d31-4b63-9133-a3eebf0a8397" containerID="5badefe5dfc5dc53862d6dc8450236c1363f0c62d22db6a1b5d8bf02e416b031" exitCode=0 Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.059366 4769 generic.go:334] "Generic (PLEG): container finished" podID="2f6bfbd9-5d31-4b63-9133-a3eebf0a8397" containerID="ac772e03063571c02a3a23adbab8727363f22c749e8ca54b86ceb6aaea9b29c0" exitCode=143 Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.059384 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2f6bfbd9-5d31-4b63-9133-a3eebf0a8397","Type":"ContainerDied","Data":"5badefe5dfc5dc53862d6dc8450236c1363f0c62d22db6a1b5d8bf02e416b031"} Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.059398 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.059409 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2f6bfbd9-5d31-4b63-9133-a3eebf0a8397","Type":"ContainerDied","Data":"ac772e03063571c02a3a23adbab8727363f22c749e8ca54b86ceb6aaea9b29c0"} Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.059424 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2f6bfbd9-5d31-4b63-9133-a3eebf0a8397","Type":"ContainerDied","Data":"b2ffc07def655a31961f7d5ac693137c0965a0d22c046824a655fd36ee880dad"} Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.059441 4769 scope.go:117] "RemoveContainer" containerID="5badefe5dfc5dc53862d6dc8450236c1363f0c62d22db6a1b5d8bf02e416b031" Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.083566 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.083979 4769 scope.go:117] "RemoveContainer" containerID="ac772e03063571c02a3a23adbab8727363f22c749e8ca54b86ceb6aaea9b29c0" Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.092609 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.102084 4769 scope.go:117] "RemoveContainer" containerID="5badefe5dfc5dc53862d6dc8450236c1363f0c62d22db6a1b5d8bf02e416b031" Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.106468 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 22 14:04:39 crc kubenswrapper[4769]: E0122 14:04:39.106884 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c" containerName="init" Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.106902 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c" containerName="init" Jan 22 14:04:39 crc kubenswrapper[4769]: E0122 14:04:39.106917 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c" containerName="dnsmasq-dns" Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.106924 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c" containerName="dnsmasq-dns" Jan 22 14:04:39 crc kubenswrapper[4769]: E0122 14:04:39.106937 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b01ed3a-6c71-4384-80a2-59814d125061" containerName="nova-manage" Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.106943 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b01ed3a-6c71-4384-80a2-59814d125061" containerName="nova-manage" Jan 22 14:04:39 crc kubenswrapper[4769]: E0122 14:04:39.106969 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f6bfbd9-5d31-4b63-9133-a3eebf0a8397" containerName="nova-api-api" Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.106974 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f6bfbd9-5d31-4b63-9133-a3eebf0a8397" containerName="nova-api-api" Jan 22 14:04:39 crc kubenswrapper[4769]: E0122 14:04:39.106983 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f6bfbd9-5d31-4b63-9133-a3eebf0a8397" containerName="nova-api-log" Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.106990 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f6bfbd9-5d31-4b63-9133-a3eebf0a8397" containerName="nova-api-log" Jan 22 
14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.107148 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f6bfbd9-5d31-4b63-9133-a3eebf0a8397" containerName="nova-api-log" Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.107157 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f6bfbd9-5d31-4b63-9133-a3eebf0a8397" containerName="nova-api-api" Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.107175 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b01ed3a-6c71-4384-80a2-59814d125061" containerName="nova-manage" Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.107181 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c" containerName="dnsmasq-dns" Jan 22 14:04:39 crc kubenswrapper[4769]: E0122 14:04:39.107181 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5badefe5dfc5dc53862d6dc8450236c1363f0c62d22db6a1b5d8bf02e416b031\": container with ID starting with 5badefe5dfc5dc53862d6dc8450236c1363f0c62d22db6a1b5d8bf02e416b031 not found: ID does not exist" containerID="5badefe5dfc5dc53862d6dc8450236c1363f0c62d22db6a1b5d8bf02e416b031" Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.107230 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5badefe5dfc5dc53862d6dc8450236c1363f0c62d22db6a1b5d8bf02e416b031"} err="failed to get container status \"5badefe5dfc5dc53862d6dc8450236c1363f0c62d22db6a1b5d8bf02e416b031\": rpc error: code = NotFound desc = could not find container \"5badefe5dfc5dc53862d6dc8450236c1363f0c62d22db6a1b5d8bf02e416b031\": container with ID starting with 5badefe5dfc5dc53862d6dc8450236c1363f0c62d22db6a1b5d8bf02e416b031 not found: ID does not exist" Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.107256 4769 scope.go:117] "RemoveContainer" containerID="ac772e03063571c02a3a23adbab8727363f22c749e8ca54b86ceb6aaea9b29c0" Jan 22 14:04:39 crc kubenswrapper[4769]: E0122 14:04:39.107739 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac772e03063571c02a3a23adbab8727363f22c749e8ca54b86ceb6aaea9b29c0\": container with ID starting with ac772e03063571c02a3a23adbab8727363f22c749e8ca54b86ceb6aaea9b29c0 not found: ID does not exist" containerID="ac772e03063571c02a3a23adbab8727363f22c749e8ca54b86ceb6aaea9b29c0" Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.107774 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac772e03063571c02a3a23adbab8727363f22c749e8ca54b86ceb6aaea9b29c0"} err="failed to get container status \"ac772e03063571c02a3a23adbab8727363f22c749e8ca54b86ceb6aaea9b29c0\": rpc error: code = NotFound desc = could not find container \"ac772e03063571c02a3a23adbab8727363f22c749e8ca54b86ceb6aaea9b29c0\": container with ID starting with ac772e03063571c02a3a23adbab8727363f22c749e8ca54b86ceb6aaea9b29c0 not found: ID does not exist" Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.107815 4769 scope.go:117] "RemoveContainer" containerID="5badefe5dfc5dc53862d6dc8450236c1363f0c62d22db6a1b5d8bf02e416b031" Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.108122 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.108122 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5badefe5dfc5dc53862d6dc8450236c1363f0c62d22db6a1b5d8bf02e416b031"} err="failed to get container status \"5badefe5dfc5dc53862d6dc8450236c1363f0c62d22db6a1b5d8bf02e416b031\": rpc error: code = NotFound desc = could not find container \"5badefe5dfc5dc53862d6dc8450236c1363f0c62d22db6a1b5d8bf02e416b031\": container with ID starting with 5badefe5dfc5dc53862d6dc8450236c1363f0c62d22db6a1b5d8bf02e416b031 not found: ID does not exist" Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.108383 4769 scope.go:117] "RemoveContainer" containerID="ac772e03063571c02a3a23adbab8727363f22c749e8ca54b86ceb6aaea9b29c0" Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.108719 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac772e03063571c02a3a23adbab8727363f22c749e8ca54b86ceb6aaea9b29c0"} err="failed to get container status \"ac772e03063571c02a3a23adbab8727363f22c749e8ca54b86ceb6aaea9b29c0\": rpc error: code = NotFound desc = could not find container \"ac772e03063571c02a3a23adbab8727363f22c749e8ca54b86ceb6aaea9b29c0\": container with ID starting with ac772e03063571c02a3a23adbab8727363f22c749e8ca54b86ceb6aaea9b29c0 not found: ID does not exist" Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.112886 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.112912 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.113157 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.116144 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.241364 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b103e0f8-85be-424c-a705-112fb70500b6-public-tls-certs\") pod \"nova-api-0\" (UID: \"b103e0f8-85be-424c-a705-112fb70500b6\") " pod="openstack/nova-api-0" Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.241409 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b103e0f8-85be-424c-a705-112fb70500b6-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b103e0f8-85be-424c-a705-112fb70500b6\") " pod="openstack/nova-api-0" Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.241445 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b103e0f8-85be-424c-a705-112fb70500b6-logs\") pod \"nova-api-0\" (UID: \"b103e0f8-85be-424c-a705-112fb70500b6\") " pod="openstack/nova-api-0" Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.241470 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gj7xf\" (UniqueName: \"kubernetes.io/projected/b103e0f8-85be-424c-a705-112fb70500b6-kube-api-access-gj7xf\") pod \"nova-api-0\" (UID: \"b103e0f8-85be-424c-a705-112fb70500b6\") " pod="openstack/nova-api-0" Jan 22 
14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.241744 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b103e0f8-85be-424c-a705-112fb70500b6-config-data\") pod \"nova-api-0\" (UID: \"b103e0f8-85be-424c-a705-112fb70500b6\") " pod="openstack/nova-api-0" Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.241910 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b103e0f8-85be-424c-a705-112fb70500b6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b103e0f8-85be-424c-a705-112fb70500b6\") " pod="openstack/nova-api-0" Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.343547 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b103e0f8-85be-424c-a705-112fb70500b6-public-tls-certs\") pod \"nova-api-0\" (UID: \"b103e0f8-85be-424c-a705-112fb70500b6\") " pod="openstack/nova-api-0" Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.344373 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b103e0f8-85be-424c-a705-112fb70500b6-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b103e0f8-85be-424c-a705-112fb70500b6\") " pod="openstack/nova-api-0" Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.344540 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b103e0f8-85be-424c-a705-112fb70500b6-logs\") pod \"nova-api-0\" (UID: \"b103e0f8-85be-424c-a705-112fb70500b6\") " pod="openstack/nova-api-0" Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.344654 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gj7xf\" (UniqueName: \"kubernetes.io/projected/b103e0f8-85be-424c-a705-112fb70500b6-kube-api-access-gj7xf\") pod \"nova-api-0\" (UID: \"b103e0f8-85be-424c-a705-112fb70500b6\") " pod="openstack/nova-api-0" Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.344978 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b103e0f8-85be-424c-a705-112fb70500b6-config-data\") pod \"nova-api-0\" (UID: \"b103e0f8-85be-424c-a705-112fb70500b6\") " pod="openstack/nova-api-0" Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.344983 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b103e0f8-85be-424c-a705-112fb70500b6-logs\") pod \"nova-api-0\" (UID: \"b103e0f8-85be-424c-a705-112fb70500b6\") " pod="openstack/nova-api-0" Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.345226 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b103e0f8-85be-424c-a705-112fb70500b6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b103e0f8-85be-424c-a705-112fb70500b6\") " pod="openstack/nova-api-0" Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.349328 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b103e0f8-85be-424c-a705-112fb70500b6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b103e0f8-85be-424c-a705-112fb70500b6\") " pod="openstack/nova-api-0" Jan 22 14:04:39 crc 
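The paired E/I lines above ("ContainerStatus from runtime service failed" with code = NotFound, followed by "DeleteContainer returned error") are the benign second half of an idempotent cleanup: the kubelet re-issues RemoveContainer for IDs the runtime has already purged and treats NotFound as already-done. A minimal sketch of that pattern, using a plain sentinel error in place of the real gRPC status check:

    package main

    import (
        "errors"
        "fmt"
    )

    // errNotFound stands in for the CRI NotFound status seen in the log.
    var errNotFound = errors.New("not found")

    func removeContainer(id string, runtime map[string]bool) error {
        if !runtime[id] {
            return fmt.Errorf("could not find container %q: %w", id, errNotFound)
        }
        delete(runtime, id)
        return nil
    }

    func main() {
        runtime := map[string]bool{"097268cd": true}
        // A second removal attempt for the same ID is expected and harmless.
        for range [2]struct{}{} {
            if err := removeContainer("097268cd", runtime); errors.Is(err, errNotFound) {
                fmt.Println("already gone:", err) // mirrors the E-lines above
            }
        }
    }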
Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.349428 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b103e0f8-85be-424c-a705-112fb70500b6-public-tls-certs\") pod \"nova-api-0\" (UID: \"b103e0f8-85be-424c-a705-112fb70500b6\") " pod="openstack/nova-api-0"
Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.349766 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b103e0f8-85be-424c-a705-112fb70500b6-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b103e0f8-85be-424c-a705-112fb70500b6\") " pod="openstack/nova-api-0"
Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.350969 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b103e0f8-85be-424c-a705-112fb70500b6-config-data\") pod \"nova-api-0\" (UID: \"b103e0f8-85be-424c-a705-112fb70500b6\") " pod="openstack/nova-api-0"
Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.362938 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gj7xf\" (UniqueName: \"kubernetes.io/projected/b103e0f8-85be-424c-a705-112fb70500b6-kube-api-access-gj7xf\") pod \"nova-api-0\" (UID: \"b103e0f8-85be-424c-a705-112fb70500b6\") " pod="openstack/nova-api-0"
Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.423215 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.896708 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 22 14:04:40 crc kubenswrapper[4769]: I0122 14:04:40.071428 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b103e0f8-85be-424c-a705-112fb70500b6","Type":"ContainerStarted","Data":"48002534c49abaab7671101ae0719c7c4c2022c7a6f39e05ab463a0a9e3f06b6"}
Jan 22 14:04:40 crc kubenswrapper[4769]: I0122 14:04:40.898707 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f6bfbd9-5d31-4b63-9133-a3eebf0a8397" path="/var/lib/kubelet/pods/2f6bfbd9-5d31-4b63-9133-a3eebf0a8397/volumes"
Jan 22 14:04:41 crc kubenswrapper[4769]: I0122 14:04:41.084621 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b103e0f8-85be-424c-a705-112fb70500b6","Type":"ContainerStarted","Data":"bf6b3a13867858551c087c4bf5b47d3b9826f0aa7f5f9d104ae27cbd8c12b07d"}
Jan 22 14:04:41 crc kubenswrapper[4769]: I0122 14:04:41.084680 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b103e0f8-85be-424c-a705-112fb70500b6","Type":"ContainerStarted","Data":"f46d18a68195a78bfa28ce1e6222943f0ee6b2ba742339cb83532f82af95e816"}
Jan 22 14:04:41 crc kubenswrapper[4769]: I0122 14:04:41.108505 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.108488121 podStartE2EDuration="2.108488121s" podCreationTimestamp="2026-01-22 14:04:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:04:41.104442381 +0000 UTC m=+1260.515552330" watchObservedRunningTime="2026-01-22 14:04:41.108488121 +0000 UTC m=+1260.519598050"
Jan 22 14:04:41 crc kubenswrapper[4769]: I0122 14:04:41.856049 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 22 14:04:41 crc kubenswrapper[4769]: I0122 14:04:41.992678 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c7cab01-0731-4a76-a6d5-b6d0905b2386-nova-metadata-tls-certs\") pod \"5c7cab01-0731-4a76-a6d5-b6d0905b2386\" (UID: \"5c7cab01-0731-4a76-a6d5-b6d0905b2386\") "
Jan 22 14:04:41 crc kubenswrapper[4769]: I0122 14:04:41.992755 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c7cab01-0731-4a76-a6d5-b6d0905b2386-logs\") pod \"5c7cab01-0731-4a76-a6d5-b6d0905b2386\" (UID: \"5c7cab01-0731-4a76-a6d5-b6d0905b2386\") "
Jan 22 14:04:41 crc kubenswrapper[4769]: I0122 14:04:41.992833 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c7cab01-0731-4a76-a6d5-b6d0905b2386-config-data\") pod \"5c7cab01-0731-4a76-a6d5-b6d0905b2386\" (UID: \"5c7cab01-0731-4a76-a6d5-b6d0905b2386\") "
Jan 22 14:04:41 crc kubenswrapper[4769]: I0122 14:04:41.992875 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c7cab01-0731-4a76-a6d5-b6d0905b2386-combined-ca-bundle\") pod \"5c7cab01-0731-4a76-a6d5-b6d0905b2386\" (UID: \"5c7cab01-0731-4a76-a6d5-b6d0905b2386\") "
Jan 22 14:04:41 crc kubenswrapper[4769]: I0122 14:04:41.992940 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7psb\" (UniqueName: \"kubernetes.io/projected/5c7cab01-0731-4a76-a6d5-b6d0905b2386-kube-api-access-n7psb\") pod \"5c7cab01-0731-4a76-a6d5-b6d0905b2386\" (UID: \"5c7cab01-0731-4a76-a6d5-b6d0905b2386\") "
Jan 22 14:04:41 crc kubenswrapper[4769]: I0122 14:04:41.994906 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c7cab01-0731-4a76-a6d5-b6d0905b2386-logs" (OuterVolumeSpecName: "logs") pod "5c7cab01-0731-4a76-a6d5-b6d0905b2386" (UID: "5c7cab01-0731-4a76-a6d5-b6d0905b2386"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 14:04:41 crc kubenswrapper[4769]: I0122 14:04:41.999523 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c7cab01-0731-4a76-a6d5-b6d0905b2386-kube-api-access-n7psb" (OuterVolumeSpecName: "kube-api-access-n7psb") pod "5c7cab01-0731-4a76-a6d5-b6d0905b2386" (UID: "5c7cab01-0731-4a76-a6d5-b6d0905b2386"). InnerVolumeSpecName "kube-api-access-n7psb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.024118 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c7cab01-0731-4a76-a6d5-b6d0905b2386-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5c7cab01-0731-4a76-a6d5-b6d0905b2386" (UID: "5c7cab01-0731-4a76-a6d5-b6d0905b2386"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.026026 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c7cab01-0731-4a76-a6d5-b6d0905b2386-config-data" (OuterVolumeSpecName: "config-data") pod "5c7cab01-0731-4a76-a6d5-b6d0905b2386" (UID: "5c7cab01-0731-4a76-a6d5-b6d0905b2386"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.054834 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c7cab01-0731-4a76-a6d5-b6d0905b2386-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "5c7cab01-0731-4a76-a6d5-b6d0905b2386" (UID: "5c7cab01-0731-4a76-a6d5-b6d0905b2386"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.095294 4769 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c7cab01-0731-4a76-a6d5-b6d0905b2386-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.095354 4769 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c7cab01-0731-4a76-a6d5-b6d0905b2386-logs\") on node \"crc\" DevicePath \"\""
Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.095367 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c7cab01-0731-4a76-a6d5-b6d0905b2386-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.095377 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c7cab01-0731-4a76-a6d5-b6d0905b2386-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.095388 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7psb\" (UniqueName: \"kubernetes.io/projected/5c7cab01-0731-4a76-a6d5-b6d0905b2386-kube-api-access-n7psb\") on node \"crc\" DevicePath \"\""
Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.097351 4769 generic.go:334] "Generic (PLEG): container finished" podID="5c7cab01-0731-4a76-a6d5-b6d0905b2386" containerID="c9ef3086d0eab5a6024f2f27d8147bdef3796ef183a5e360249a426cc534010c" exitCode=0
Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.097418 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.097451 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5c7cab01-0731-4a76-a6d5-b6d0905b2386","Type":"ContainerDied","Data":"c9ef3086d0eab5a6024f2f27d8147bdef3796ef183a5e360249a426cc534010c"}
Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.097501 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5c7cab01-0731-4a76-a6d5-b6d0905b2386","Type":"ContainerDied","Data":"8ea58d153112320153b0ab6e47deea2ca60609e453fb3c50cf4a5566adce1855"}
Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.097519 4769 scope.go:117] "RemoveContainer" containerID="c9ef3086d0eab5a6024f2f27d8147bdef3796ef183a5e360249a426cc534010c"
Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.218905 4769 scope.go:117] "RemoveContainer" containerID="5f77e6a254e6237b524fe2cf9da977a96602a8070e3ffc2d54bbf6f07842e09b"
Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.223416 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.237129 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.251178 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Jan 22 14:04:42 crc kubenswrapper[4769]: E0122 14:04:42.251636 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c7cab01-0731-4a76-a6d5-b6d0905b2386" containerName="nova-metadata-metadata"
Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.251662 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c7cab01-0731-4a76-a6d5-b6d0905b2386" containerName="nova-metadata-metadata"
Jan 22 14:04:42 crc kubenswrapper[4769]: E0122 14:04:42.251686 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c7cab01-0731-4a76-a6d5-b6d0905b2386" containerName="nova-metadata-log"
Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.251694 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c7cab01-0731-4a76-a6d5-b6d0905b2386" containerName="nova-metadata-log"
Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.251902 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c7cab01-0731-4a76-a6d5-b6d0905b2386" containerName="nova-metadata-metadata"
Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.251922 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c7cab01-0731-4a76-a6d5-b6d0905b2386" containerName="nova-metadata-log"
Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.252981 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.257178 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.266419 4769 scope.go:117] "RemoveContainer" containerID="c9ef3086d0eab5a6024f2f27d8147bdef3796ef183a5e360249a426cc534010c" Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.270972 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 22 14:04:42 crc kubenswrapper[4769]: E0122 14:04:42.270727 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9ef3086d0eab5a6024f2f27d8147bdef3796ef183a5e360249a426cc534010c\": container with ID starting with c9ef3086d0eab5a6024f2f27d8147bdef3796ef183a5e360249a426cc534010c not found: ID does not exist" containerID="c9ef3086d0eab5a6024f2f27d8147bdef3796ef183a5e360249a426cc534010c" Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.273108 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9ef3086d0eab5a6024f2f27d8147bdef3796ef183a5e360249a426cc534010c"} err="failed to get container status \"c9ef3086d0eab5a6024f2f27d8147bdef3796ef183a5e360249a426cc534010c\": rpc error: code = NotFound desc = could not find container \"c9ef3086d0eab5a6024f2f27d8147bdef3796ef183a5e360249a426cc534010c\": container with ID starting with c9ef3086d0eab5a6024f2f27d8147bdef3796ef183a5e360249a426cc534010c not found: ID does not exist" Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.273144 4769 scope.go:117] "RemoveContainer" containerID="5f77e6a254e6237b524fe2cf9da977a96602a8070e3ffc2d54bbf6f07842e09b" Jan 22 14:04:42 crc kubenswrapper[4769]: E0122 14:04:42.274720 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f77e6a254e6237b524fe2cf9da977a96602a8070e3ffc2d54bbf6f07842e09b\": container with ID starting with 5f77e6a254e6237b524fe2cf9da977a96602a8070e3ffc2d54bbf6f07842e09b not found: ID does not exist" containerID="5f77e6a254e6237b524fe2cf9da977a96602a8070e3ffc2d54bbf6f07842e09b" Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.274770 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f77e6a254e6237b524fe2cf9da977a96602a8070e3ffc2d54bbf6f07842e09b"} err="failed to get container status \"5f77e6a254e6237b524fe2cf9da977a96602a8070e3ffc2d54bbf6f07842e09b\": rpc error: code = NotFound desc = could not find container \"5f77e6a254e6237b524fe2cf9da977a96602a8070e3ffc2d54bbf6f07842e09b\": container with ID starting with 5f77e6a254e6237b524fe2cf9da977a96602a8070e3ffc2d54bbf6f07842e09b not found: ID does not exist" Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.280554 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.401998 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6fa05e3-584d-4c81-bef8-b5224b93fba3-config-data\") pod \"nova-metadata-0\" (UID: \"a6fa05e3-584d-4c81-bef8-b5224b93fba3\") " pod="openstack/nova-metadata-0" Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.402046 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6fa05e3-584d-4c81-bef8-b5224b93fba3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a6fa05e3-584d-4c81-bef8-b5224b93fba3\") " pod="openstack/nova-metadata-0" Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.402117 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6fa05e3-584d-4c81-bef8-b5224b93fba3-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a6fa05e3-584d-4c81-bef8-b5224b93fba3\") " pod="openstack/nova-metadata-0" Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.402177 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6fa05e3-584d-4c81-bef8-b5224b93fba3-logs\") pod \"nova-metadata-0\" (UID: \"a6fa05e3-584d-4c81-bef8-b5224b93fba3\") " pod="openstack/nova-metadata-0" Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.402255 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6wv5\" (UniqueName: \"kubernetes.io/projected/a6fa05e3-584d-4c81-bef8-b5224b93fba3-kube-api-access-s6wv5\") pod \"nova-metadata-0\" (UID: \"a6fa05e3-584d-4c81-bef8-b5224b93fba3\") " pod="openstack/nova-metadata-0" Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.504014 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6fa05e3-584d-4c81-bef8-b5224b93fba3-config-data\") pod \"nova-metadata-0\" (UID: \"a6fa05e3-584d-4c81-bef8-b5224b93fba3\") " pod="openstack/nova-metadata-0" Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.504056 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6fa05e3-584d-4c81-bef8-b5224b93fba3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a6fa05e3-584d-4c81-bef8-b5224b93fba3\") " pod="openstack/nova-metadata-0" Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.504093 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6fa05e3-584d-4c81-bef8-b5224b93fba3-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a6fa05e3-584d-4c81-bef8-b5224b93fba3\") " pod="openstack/nova-metadata-0" Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.504160 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6fa05e3-584d-4c81-bef8-b5224b93fba3-logs\") pod \"nova-metadata-0\" (UID: \"a6fa05e3-584d-4c81-bef8-b5224b93fba3\") " pod="openstack/nova-metadata-0" Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.504243 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6wv5\" (UniqueName: \"kubernetes.io/projected/a6fa05e3-584d-4c81-bef8-b5224b93fba3-kube-api-access-s6wv5\") pod \"nova-metadata-0\" (UID: \"a6fa05e3-584d-4c81-bef8-b5224b93fba3\") " pod="openstack/nova-metadata-0" Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.504834 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6fa05e3-584d-4c81-bef8-b5224b93fba3-logs\") pod \"nova-metadata-0\" (UID: \"a6fa05e3-584d-4c81-bef8-b5224b93fba3\") " pod="openstack/nova-metadata-0" Jan 22 
14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.508651 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6fa05e3-584d-4c81-bef8-b5224b93fba3-config-data\") pod \"nova-metadata-0\" (UID: \"a6fa05e3-584d-4c81-bef8-b5224b93fba3\") " pod="openstack/nova-metadata-0"
Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.508903 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6fa05e3-584d-4c81-bef8-b5224b93fba3-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a6fa05e3-584d-4c81-bef8-b5224b93fba3\") " pod="openstack/nova-metadata-0"
Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.511858 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6fa05e3-584d-4c81-bef8-b5224b93fba3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a6fa05e3-584d-4c81-bef8-b5224b93fba3\") " pod="openstack/nova-metadata-0"
Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.522998 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6wv5\" (UniqueName: \"kubernetes.io/projected/a6fa05e3-584d-4c81-bef8-b5224b93fba3-kube-api-access-s6wv5\") pod \"nova-metadata-0\" (UID: \"a6fa05e3-584d-4c81-bef8-b5224b93fba3\") " pod="openstack/nova-metadata-0"
Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.581200 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.587639 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.707156 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7875d554-e943-402f-b176-8644590e7926-combined-ca-bundle\") pod \"7875d554-e943-402f-b176-8644590e7926\" (UID: \"7875d554-e943-402f-b176-8644590e7926\") "
Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.707285 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zps2f\" (UniqueName: \"kubernetes.io/projected/7875d554-e943-402f-b176-8644590e7926-kube-api-access-zps2f\") pod \"7875d554-e943-402f-b176-8644590e7926\" (UID: \"7875d554-e943-402f-b176-8644590e7926\") "
Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.707355 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7875d554-e943-402f-b176-8644590e7926-config-data\") pod \"7875d554-e943-402f-b176-8644590e7926\" (UID: \"7875d554-e943-402f-b176-8644590e7926\") "
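Mount activity for the new nova-metadata-0 UID (a6fa05e3-...) interleaves here with unmount activity for the old nova-scheduler-0 UID (7875d554-...): the volume manager keeps comparing the desired set of mounts against the actual one and issues idempotent operations in whichever direction is needed. A toy sketch of that reconcile pattern, an illustration only and not the kubelet's implementation:

    // Toy model of the reconcile pattern the volume-manager entries trace:
    // compare desired mounts against actual state and issue idempotent SetUp
    // calls for whatever is missing.
    package main

    import "fmt"

    type volume struct{ name, uniqueName string }

    func reconcile(desired []volume, mounted map[string]bool) {
        for _, v := range desired {
            if mounted[v.uniqueName] {
                continue // already in the actual state; nothing to do
            }
            fmt.Printf("MountVolume started for volume %q\n", v.name)
            // ... perform the mount; only then record it, so a crash before
            // this point makes the next pass retry the same operation ...
            mounted[v.uniqueName] = true
            fmt.Printf("MountVolume.SetUp succeeded for volume %q\n", v.name)
        }
    }

    func main() {
        desired := []volume{
            {"config-data", "kubernetes.io/secret/a6fa05e3-584d-4c81-bef8-b5224b93fba3-config-data"},
            {"logs", "kubernetes.io/empty-dir/a6fa05e3-584d-4c81-bef8-b5224b93fba3-logs"},
        }
        reconcile(desired, map[string]bool{}) // a second pass would print nothing
    }
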
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.736566 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7875d554-e943-402f-b176-8644590e7926-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7875d554-e943-402f-b176-8644590e7926" (UID: "7875d554-e943-402f-b176-8644590e7926"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.744946 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7875d554-e943-402f-b176-8644590e7926-config-data" (OuterVolumeSpecName: "config-data") pod "7875d554-e943-402f-b176-8644590e7926" (UID: "7875d554-e943-402f-b176-8644590e7926"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.809609 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zps2f\" (UniqueName: \"kubernetes.io/projected/7875d554-e943-402f-b176-8644590e7926-kube-api-access-zps2f\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.809660 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7875d554-e943-402f-b176-8644590e7926-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.809673 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7875d554-e943-402f-b176-8644590e7926-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.900215 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c7cab01-0731-4a76-a6d5-b6d0905b2386" path="/var/lib/kubelet/pods/5c7cab01-0731-4a76-a6d5-b6d0905b2386/volumes" Jan 22 14:04:43 crc kubenswrapper[4769]: I0122 14:04:43.052683 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 14:04:43 crc kubenswrapper[4769]: I0122 14:04:43.110553 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a6fa05e3-584d-4c81-bef8-b5224b93fba3","Type":"ContainerStarted","Data":"f09c359b5df8f768dc10964c4ad03b6a9f9bc2c52bacd9fde09bd9eddfd45708"} Jan 22 14:04:43 crc kubenswrapper[4769]: I0122 14:04:43.112341 4769 generic.go:334] "Generic (PLEG): container finished" podID="7875d554-e943-402f-b176-8644590e7926" containerID="e0754791b973b6c6e50cd28d6e666820f0fab5aa1539d3354d44e545af3bf6d2" exitCode=0 Jan 22 14:04:43 crc kubenswrapper[4769]: I0122 14:04:43.112387 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7875d554-e943-402f-b176-8644590e7926","Type":"ContainerDied","Data":"e0754791b973b6c6e50cd28d6e666820f0fab5aa1539d3354d44e545af3bf6d2"} Jan 22 14:04:43 crc kubenswrapper[4769]: I0122 14:04:43.112406 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7875d554-e943-402f-b176-8644590e7926","Type":"ContainerDied","Data":"572df80009e2badcb09d845c35585498e31a50e4449686f5a44d8ee1e3d26270"} Jan 22 14:04:43 crc kubenswrapper[4769]: I0122 14:04:43.112420 4769 scope.go:117] "RemoveContainer" containerID="e0754791b973b6c6e50cd28d6e666820f0fab5aa1539d3354d44e545af3bf6d2" Jan 22 14:04:43 crc kubenswrapper[4769]: I0122 14:04:43.112545 4769 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 22 14:04:43 crc kubenswrapper[4769]: I0122 14:04:43.155097 4769 scope.go:117] "RemoveContainer" containerID="e0754791b973b6c6e50cd28d6e666820f0fab5aa1539d3354d44e545af3bf6d2" Jan 22 14:04:43 crc kubenswrapper[4769]: E0122 14:04:43.156935 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0754791b973b6c6e50cd28d6e666820f0fab5aa1539d3354d44e545af3bf6d2\": container with ID starting with e0754791b973b6c6e50cd28d6e666820f0fab5aa1539d3354d44e545af3bf6d2 not found: ID does not exist" containerID="e0754791b973b6c6e50cd28d6e666820f0fab5aa1539d3354d44e545af3bf6d2" Jan 22 14:04:43 crc kubenswrapper[4769]: I0122 14:04:43.156991 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0754791b973b6c6e50cd28d6e666820f0fab5aa1539d3354d44e545af3bf6d2"} err="failed to get container status \"e0754791b973b6c6e50cd28d6e666820f0fab5aa1539d3354d44e545af3bf6d2\": rpc error: code = NotFound desc = could not find container \"e0754791b973b6c6e50cd28d6e666820f0fab5aa1539d3354d44e545af3bf6d2\": container with ID starting with e0754791b973b6c6e50cd28d6e666820f0fab5aa1539d3354d44e545af3bf6d2 not found: ID does not exist" Jan 22 14:04:43 crc kubenswrapper[4769]: I0122 14:04:43.162950 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 14:04:43 crc kubenswrapper[4769]: I0122 14:04:43.172239 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 14:04:43 crc kubenswrapper[4769]: I0122 14:04:43.181739 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 14:04:43 crc kubenswrapper[4769]: E0122 14:04:43.182199 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7875d554-e943-402f-b176-8644590e7926" containerName="nova-scheduler-scheduler" Jan 22 14:04:43 crc kubenswrapper[4769]: I0122 14:04:43.182222 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="7875d554-e943-402f-b176-8644590e7926" containerName="nova-scheduler-scheduler" Jan 22 14:04:43 crc kubenswrapper[4769]: I0122 14:04:43.182476 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="7875d554-e943-402f-b176-8644590e7926" containerName="nova-scheduler-scheduler" Jan 22 14:04:43 crc kubenswrapper[4769]: I0122 14:04:43.184714 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 22 14:04:43 crc kubenswrapper[4769]: I0122 14:04:43.187472 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 22 14:04:43 crc kubenswrapper[4769]: I0122 14:04:43.216875 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 14:04:43 crc kubenswrapper[4769]: I0122 14:04:43.317936 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/169a141c-dd3f-4efa-9b61-bb8df13bcd49-config-data\") pod \"nova-scheduler-0\" (UID: \"169a141c-dd3f-4efa-9b61-bb8df13bcd49\") " pod="openstack/nova-scheduler-0" Jan 22 14:04:43 crc kubenswrapper[4769]: I0122 14:04:43.318006 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4k4p\" (UniqueName: \"kubernetes.io/projected/169a141c-dd3f-4efa-9b61-bb8df13bcd49-kube-api-access-m4k4p\") pod \"nova-scheduler-0\" (UID: \"169a141c-dd3f-4efa-9b61-bb8df13bcd49\") " pod="openstack/nova-scheduler-0" Jan 22 14:04:43 crc kubenswrapper[4769]: I0122 14:04:43.318190 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/169a141c-dd3f-4efa-9b61-bb8df13bcd49-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"169a141c-dd3f-4efa-9b61-bb8df13bcd49\") " pod="openstack/nova-scheduler-0" Jan 22 14:04:43 crc kubenswrapper[4769]: I0122 14:04:43.420460 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/169a141c-dd3f-4efa-9b61-bb8df13bcd49-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"169a141c-dd3f-4efa-9b61-bb8df13bcd49\") " pod="openstack/nova-scheduler-0" Jan 22 14:04:43 crc kubenswrapper[4769]: I0122 14:04:43.420668 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/169a141c-dd3f-4efa-9b61-bb8df13bcd49-config-data\") pod \"nova-scheduler-0\" (UID: \"169a141c-dd3f-4efa-9b61-bb8df13bcd49\") " pod="openstack/nova-scheduler-0" Jan 22 14:04:43 crc kubenswrapper[4769]: I0122 14:04:43.420751 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4k4p\" (UniqueName: \"kubernetes.io/projected/169a141c-dd3f-4efa-9b61-bb8df13bcd49-kube-api-access-m4k4p\") pod \"nova-scheduler-0\" (UID: \"169a141c-dd3f-4efa-9b61-bb8df13bcd49\") " pod="openstack/nova-scheduler-0" Jan 22 14:04:43 crc kubenswrapper[4769]: I0122 14:04:43.424343 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/169a141c-dd3f-4efa-9b61-bb8df13bcd49-config-data\") pod \"nova-scheduler-0\" (UID: \"169a141c-dd3f-4efa-9b61-bb8df13bcd49\") " pod="openstack/nova-scheduler-0" Jan 22 14:04:43 crc kubenswrapper[4769]: I0122 14:04:43.424360 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/169a141c-dd3f-4efa-9b61-bb8df13bcd49-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"169a141c-dd3f-4efa-9b61-bb8df13bcd49\") " pod="openstack/nova-scheduler-0" Jan 22 14:04:43 crc kubenswrapper[4769]: I0122 14:04:43.438967 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4k4p\" (UniqueName: 
\"kubernetes.io/projected/169a141c-dd3f-4efa-9b61-bb8df13bcd49-kube-api-access-m4k4p\") pod \"nova-scheduler-0\" (UID: \"169a141c-dd3f-4efa-9b61-bb8df13bcd49\") " pod="openstack/nova-scheduler-0" Jan 22 14:04:43 crc kubenswrapper[4769]: I0122 14:04:43.510036 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 22 14:04:44 crc kubenswrapper[4769]: I0122 14:04:44.048969 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 14:04:44 crc kubenswrapper[4769]: W0122 14:04:44.052198 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod169a141c_dd3f_4efa_9b61_bb8df13bcd49.slice/crio-e093c2b0dee48c664fb8988d804b59401d8f09a56e5e18d60ac79ad8fdda33e0 WatchSource:0}: Error finding container e093c2b0dee48c664fb8988d804b59401d8f09a56e5e18d60ac79ad8fdda33e0: Status 404 returned error can't find the container with id e093c2b0dee48c664fb8988d804b59401d8f09a56e5e18d60ac79ad8fdda33e0 Jan 22 14:04:44 crc kubenswrapper[4769]: I0122 14:04:44.128836 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"169a141c-dd3f-4efa-9b61-bb8df13bcd49","Type":"ContainerStarted","Data":"e093c2b0dee48c664fb8988d804b59401d8f09a56e5e18d60ac79ad8fdda33e0"} Jan 22 14:04:44 crc kubenswrapper[4769]: I0122 14:04:44.133065 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a6fa05e3-584d-4c81-bef8-b5224b93fba3","Type":"ContainerStarted","Data":"9f1b725c403865900aba20ae4b6afc50bd6e84093c3bcf80cf680d36842cb58c"} Jan 22 14:04:44 crc kubenswrapper[4769]: I0122 14:04:44.133113 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a6fa05e3-584d-4c81-bef8-b5224b93fba3","Type":"ContainerStarted","Data":"469949ef4013c921b84065e6d0391347e0e95af7d3fecd4ae7d8f79ba75e3ad5"} Jan 22 14:04:44 crc kubenswrapper[4769]: I0122 14:04:44.153452 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.153433483 podStartE2EDuration="2.153433483s" podCreationTimestamp="2026-01-22 14:04:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:04:44.153417592 +0000 UTC m=+1263.564527531" watchObservedRunningTime="2026-01-22 14:04:44.153433483 +0000 UTC m=+1263.564543412" Jan 22 14:04:44 crc kubenswrapper[4769]: I0122 14:04:44.892890 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7875d554-e943-402f-b176-8644590e7926" path="/var/lib/kubelet/pods/7875d554-e943-402f-b176-8644590e7926/volumes" Jan 22 14:04:45 crc kubenswrapper[4769]: I0122 14:04:45.143824 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"169a141c-dd3f-4efa-9b61-bb8df13bcd49","Type":"ContainerStarted","Data":"5304c7146ba479e17e1db2d0c708f85c69b17235905053819dbf50e6aec78505"} Jan 22 14:04:45 crc kubenswrapper[4769]: I0122 14:04:45.172208 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.172185933 podStartE2EDuration="2.172185933s" podCreationTimestamp="2026-01-22 14:04:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:04:45.1669361 +0000 UTC m=+1264.578046029" 
watchObservedRunningTime="2026-01-22 14:04:45.172185933 +0000 UTC m=+1264.583295862" Jan 22 14:04:47 crc kubenswrapper[4769]: I0122 14:04:47.581963 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 22 14:04:47 crc kubenswrapper[4769]: I0122 14:04:47.582303 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 22 14:04:48 crc kubenswrapper[4769]: I0122 14:04:48.510157 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 22 14:04:49 crc kubenswrapper[4769]: I0122 14:04:49.424277 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 22 14:04:49 crc kubenswrapper[4769]: I0122 14:04:49.425188 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 22 14:04:50 crc kubenswrapper[4769]: I0122 14:04:50.436121 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b103e0f8-85be-424c-a705-112fb70500b6" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.204:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 22 14:04:50 crc kubenswrapper[4769]: I0122 14:04:50.436141 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b103e0f8-85be-424c-a705-112fb70500b6" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.204:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 22 14:04:52 crc kubenswrapper[4769]: I0122 14:04:52.581589 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 22 14:04:52 crc kubenswrapper[4769]: I0122 14:04:52.582055 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 22 14:04:53 crc kubenswrapper[4769]: I0122 14:04:53.511202 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 22 14:04:53 crc kubenswrapper[4769]: I0122 14:04:53.542031 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 22 14:04:53 crc kubenswrapper[4769]: I0122 14:04:53.596961 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="a6fa05e3-584d-4c81-bef8-b5224b93fba3" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.205:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 22 14:04:53 crc kubenswrapper[4769]: I0122 14:04:53.596971 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="a6fa05e3-584d-4c81-bef8-b5224b93fba3" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.205:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 22 14:04:54 crc kubenswrapper[4769]: I0122 14:04:54.256267 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 22 14:04:58 crc kubenswrapper[4769]: I0122 14:04:58.438924 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 22 14:04:59 crc kubenswrapper[4769]: I0122 14:04:59.432414 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/nova-api-0" Jan 22 14:04:59 crc kubenswrapper[4769]: I0122 14:04:59.432854 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 22 14:04:59 crc kubenswrapper[4769]: I0122 14:04:59.433732 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 22 14:04:59 crc kubenswrapper[4769]: I0122 14:04:59.438764 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 22 14:05:00 crc kubenswrapper[4769]: I0122 14:05:00.287294 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 22 14:05:00 crc kubenswrapper[4769]: I0122 14:05:00.294140 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 22 14:05:02 crc kubenswrapper[4769]: I0122 14:05:02.588067 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 22 14:05:02 crc kubenswrapper[4769]: I0122 14:05:02.589082 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 22 14:05:02 crc kubenswrapper[4769]: I0122 14:05:02.593812 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 22 14:05:02 crc kubenswrapper[4769]: I0122 14:05:02.596456 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 22 14:05:10 crc kubenswrapper[4769]: I0122 14:05:10.976867 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 22 14:05:11 crc kubenswrapper[4769]: I0122 14:05:11.872761 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 22 14:05:14 crc kubenswrapper[4769]: I0122 14:05:14.935876 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="12de511c-514e-496c-9fbf-6d1e10db81fc" containerName="rabbitmq" containerID="cri-o://49f4ea3ddc87a4f5bedaa873ef01966d747d665e05df782c166bb9cc4f6f7bd0" gracePeriod=604797 Jan 22 14:05:15 crc kubenswrapper[4769]: I0122 14:05:15.891717 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="7b5386c6-ecca-4882-b692-80c4f5a194e7" containerName="rabbitmq" containerID="cri-o://401fb4362859b85fbcab13853d6edb403e6c11a9836d41d62c76e8de98656fce" gracePeriod=604796 Jan 22 14:05:21 crc kubenswrapper[4769]: I0122 14:05:21.466623 4769 generic.go:334] "Generic (PLEG): container finished" podID="12de511c-514e-496c-9fbf-6d1e10db81fc" containerID="49f4ea3ddc87a4f5bedaa873ef01966d747d665e05df782c166bb9cc4f6f7bd0" exitCode=0 Jan 22 14:05:21 crc kubenswrapper[4769]: I0122 14:05:21.466691 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"12de511c-514e-496c-9fbf-6d1e10db81fc","Type":"ContainerDied","Data":"49f4ea3ddc87a4f5bedaa873ef01966d747d665e05df782c166bb9cc4f6f7bd0"} Jan 22 14:05:21 crc kubenswrapper[4769]: I0122 14:05:21.554502 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 22 14:05:21 crc kubenswrapper[4769]: I0122 14:05:21.737637 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/12de511c-514e-496c-9fbf-6d1e10db81fc-plugins-conf\") pod \"12de511c-514e-496c-9fbf-6d1e10db81fc\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") " Jan 22 14:05:21 crc kubenswrapper[4769]: I0122 14:05:21.737753 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/12de511c-514e-496c-9fbf-6d1e10db81fc-config-data\") pod \"12de511c-514e-496c-9fbf-6d1e10db81fc\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") " Jan 22 14:05:21 crc kubenswrapper[4769]: I0122 14:05:21.737807 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/12de511c-514e-496c-9fbf-6d1e10db81fc-pod-info\") pod \"12de511c-514e-496c-9fbf-6d1e10db81fc\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") " Jan 22 14:05:21 crc kubenswrapper[4769]: I0122 14:05:21.737836 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"12de511c-514e-496c-9fbf-6d1e10db81fc\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") " Jan 22 14:05:21 crc kubenswrapper[4769]: I0122 14:05:21.737864 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-csgrc\" (UniqueName: \"kubernetes.io/projected/12de511c-514e-496c-9fbf-6d1e10db81fc-kube-api-access-csgrc\") pod \"12de511c-514e-496c-9fbf-6d1e10db81fc\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") " Jan 22 14:05:21 crc kubenswrapper[4769]: I0122 14:05:21.737887 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/12de511c-514e-496c-9fbf-6d1e10db81fc-rabbitmq-tls\") pod \"12de511c-514e-496c-9fbf-6d1e10db81fc\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") " Jan 22 14:05:21 crc kubenswrapper[4769]: I0122 14:05:21.737948 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/12de511c-514e-496c-9fbf-6d1e10db81fc-server-conf\") pod \"12de511c-514e-496c-9fbf-6d1e10db81fc\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") " Jan 22 14:05:21 crc kubenswrapper[4769]: I0122 14:05:21.737992 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/12de511c-514e-496c-9fbf-6d1e10db81fc-erlang-cookie-secret\") pod \"12de511c-514e-496c-9fbf-6d1e10db81fc\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") " Jan 22 14:05:21 crc kubenswrapper[4769]: I0122 14:05:21.738039 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/12de511c-514e-496c-9fbf-6d1e10db81fc-rabbitmq-confd\") pod \"12de511c-514e-496c-9fbf-6d1e10db81fc\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") " Jan 22 14:05:21 crc kubenswrapper[4769]: I0122 14:05:21.738097 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/12de511c-514e-496c-9fbf-6d1e10db81fc-rabbitmq-erlang-cookie\") pod \"12de511c-514e-496c-9fbf-6d1e10db81fc\" (UID: 
\"12de511c-514e-496c-9fbf-6d1e10db81fc\") " Jan 22 14:05:21 crc kubenswrapper[4769]: I0122 14:05:21.738216 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/12de511c-514e-496c-9fbf-6d1e10db81fc-rabbitmq-plugins\") pod \"12de511c-514e-496c-9fbf-6d1e10db81fc\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") " Jan 22 14:05:21 crc kubenswrapper[4769]: I0122 14:05:21.739052 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12de511c-514e-496c-9fbf-6d1e10db81fc-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "12de511c-514e-496c-9fbf-6d1e10db81fc" (UID: "12de511c-514e-496c-9fbf-6d1e10db81fc"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 14:05:21 crc kubenswrapper[4769]: I0122 14:05:21.739170 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12de511c-514e-496c-9fbf-6d1e10db81fc-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "12de511c-514e-496c-9fbf-6d1e10db81fc" (UID: "12de511c-514e-496c-9fbf-6d1e10db81fc"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:05:21 crc kubenswrapper[4769]: I0122 14:05:21.739272 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12de511c-514e-496c-9fbf-6d1e10db81fc-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "12de511c-514e-496c-9fbf-6d1e10db81fc" (UID: "12de511c-514e-496c-9fbf-6d1e10db81fc"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 14:05:21 crc kubenswrapper[4769]: I0122 14:05:21.752708 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/12de511c-514e-496c-9fbf-6d1e10db81fc-pod-info" (OuterVolumeSpecName: "pod-info") pod "12de511c-514e-496c-9fbf-6d1e10db81fc" (UID: "12de511c-514e-496c-9fbf-6d1e10db81fc"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 22 14:05:21 crc kubenswrapper[4769]: I0122 14:05:21.753666 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12de511c-514e-496c-9fbf-6d1e10db81fc-kube-api-access-csgrc" (OuterVolumeSpecName: "kube-api-access-csgrc") pod "12de511c-514e-496c-9fbf-6d1e10db81fc" (UID: "12de511c-514e-496c-9fbf-6d1e10db81fc"). InnerVolumeSpecName "kube-api-access-csgrc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:05:21 crc kubenswrapper[4769]: I0122 14:05:21.754784 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12de511c-514e-496c-9fbf-6d1e10db81fc-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "12de511c-514e-496c-9fbf-6d1e10db81fc" (UID: "12de511c-514e-496c-9fbf-6d1e10db81fc"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:05:21 crc kubenswrapper[4769]: I0122 14:05:21.756123 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "persistence") pod "12de511c-514e-496c-9fbf-6d1e10db81fc" (UID: "12de511c-514e-496c-9fbf-6d1e10db81fc"). InnerVolumeSpecName "local-storage07-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 22 14:05:21 crc kubenswrapper[4769]: I0122 14:05:21.765364 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12de511c-514e-496c-9fbf-6d1e10db81fc-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "12de511c-514e-496c-9fbf-6d1e10db81fc" (UID: "12de511c-514e-496c-9fbf-6d1e10db81fc"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:05:21 crc kubenswrapper[4769]: I0122 14:05:21.780583 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12de511c-514e-496c-9fbf-6d1e10db81fc-config-data" (OuterVolumeSpecName: "config-data") pod "12de511c-514e-496c-9fbf-6d1e10db81fc" (UID: "12de511c-514e-496c-9fbf-6d1e10db81fc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:05:21 crc kubenswrapper[4769]: I0122 14:05:21.797468 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12de511c-514e-496c-9fbf-6d1e10db81fc-server-conf" (OuterVolumeSpecName: "server-conf") pod "12de511c-514e-496c-9fbf-6d1e10db81fc" (UID: "12de511c-514e-496c-9fbf-6d1e10db81fc"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:05:21 crc kubenswrapper[4769]: I0122 14:05:21.840872 4769 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/12de511c-514e-496c-9fbf-6d1e10db81fc-server-conf\") on node \"crc\" DevicePath \"\"" Jan 22 14:05:21 crc kubenswrapper[4769]: I0122 14:05:21.840910 4769 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/12de511c-514e-496c-9fbf-6d1e10db81fc-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 22 14:05:21 crc kubenswrapper[4769]: I0122 14:05:21.840923 4769 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/12de511c-514e-496c-9fbf-6d1e10db81fc-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 22 14:05:21 crc kubenswrapper[4769]: I0122 14:05:21.840932 4769 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/12de511c-514e-496c-9fbf-6d1e10db81fc-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 22 14:05:21 crc kubenswrapper[4769]: I0122 14:05:21.840940 4769 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/12de511c-514e-496c-9fbf-6d1e10db81fc-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 22 14:05:21 crc kubenswrapper[4769]: I0122 14:05:21.840948 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/12de511c-514e-496c-9fbf-6d1e10db81fc-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 14:05:21 crc kubenswrapper[4769]: I0122 14:05:21.840956 4769 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/12de511c-514e-496c-9fbf-6d1e10db81fc-pod-info\") on node \"crc\" DevicePath \"\"" Jan 22 14:05:21 crc kubenswrapper[4769]: I0122 14:05:21.840983 4769 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Jan 22 14:05:21 crc kubenswrapper[4769]: I0122 
14:05:21.840994 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-csgrc\" (UniqueName: \"kubernetes.io/projected/12de511c-514e-496c-9fbf-6d1e10db81fc-kube-api-access-csgrc\") on node \"crc\" DevicePath \"\""
Jan 22 14:05:21 crc kubenswrapper[4769]: I0122 14:05:21.841003 4769 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/12de511c-514e-496c-9fbf-6d1e10db81fc-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Jan 22 14:05:21 crc kubenswrapper[4769]: I0122 14:05:21.852433 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12de511c-514e-496c-9fbf-6d1e10db81fc-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "12de511c-514e-496c-9fbf-6d1e10db81fc" (UID: "12de511c-514e-496c-9fbf-6d1e10db81fc"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 14:05:21 crc kubenswrapper[4769]: I0122 14:05:21.871687 4769 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc"
Jan 22 14:05:21 crc kubenswrapper[4769]: I0122 14:05:21.943984 4769 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\""
Jan 22 14:05:21 crc kubenswrapper[4769]: I0122 14:05:21.944481 4769 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/12de511c-514e-496c-9fbf-6d1e10db81fc-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.065700 4769 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="7b5386c6-ecca-4882-b692-80c4f5a194e7" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.99:5671: connect: connection refused"
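The teardown above follows the usual order for each volume: UnmountVolume.TearDown succeeded, then "Volume detached", with an extra UnmountDevice step for the local persistent volume (local-storage07-crc) before it too reports detached. Because every entry carries the pod UID, filtering on it pulls the whole sequence together; a minimal filter reading a log like this one from stdin:

    // Pull one pod's volume-teardown story out of the log: keep only lines
    // that mention the pod UID and a volume-lifecycle keyword.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const uid = "12de511c-514e-496c-9fbf-6d1e10db81fc"
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // kubelet lines can be long
        for sc.Scan() {
            line := sc.Text()
            if !strings.Contains(line, uid) {
                continue
            }
            if strings.Contains(line, "UnmountVolume") ||
                strings.Contains(line, "Volume detached") ||
                strings.Contains(line, "UnmountDevice") {
                fmt.Println(line)
            }
        }
    }
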
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.476955 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"12de511c-514e-496c-9fbf-6d1e10db81fc","Type":"ContainerDied","Data":"6d72a769611a46bdb1768f4e9380f28bb2a07dc2061ec5bd95716855943febe1"} Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.477567 4769 scope.go:117] "RemoveContainer" containerID="49f4ea3ddc87a4f5bedaa873ef01966d747d665e05df782c166bb9cc4f6f7bd0" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.481145 4769 generic.go:334] "Generic (PLEG): container finished" podID="7b5386c6-ecca-4882-b692-80c4f5a194e7" containerID="401fb4362859b85fbcab13853d6edb403e6c11a9836d41d62c76e8de98656fce" exitCode=0 Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.481197 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7b5386c6-ecca-4882-b692-80c4f5a194e7","Type":"ContainerDied","Data":"401fb4362859b85fbcab13853d6edb403e6c11a9836d41d62c76e8de98656fce"} Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.481235 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7b5386c6-ecca-4882-b692-80c4f5a194e7","Type":"ContainerDied","Data":"ccc004cd79462493e89b2cd51c3ab3ddf01650baa9a183653d7b3f8461132890"} Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.481246 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ccc004cd79462493e89b2cd51c3ab3ddf01650baa9a183653d7b3f8461132890" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.498901 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.522742 4769 scope.go:117] "RemoveContainer" containerID="02b31e2a239b0168026857e943798de5de7f95b04782c217474e99a5a431076d" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.528753 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.560088 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.608364 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 22 14:05:22 crc kubenswrapper[4769]: E0122 14:05:22.608859 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12de511c-514e-496c-9fbf-6d1e10db81fc" containerName="setup-container" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.608880 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="12de511c-514e-496c-9fbf-6d1e10db81fc" containerName="setup-container" Jan 22 14:05:22 crc kubenswrapper[4769]: E0122 14:05:22.608900 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b5386c6-ecca-4882-b692-80c4f5a194e7" containerName="setup-container" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.608907 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b5386c6-ecca-4882-b692-80c4f5a194e7" containerName="setup-container" Jan 22 14:05:22 crc kubenswrapper[4769]: E0122 14:05:22.608929 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b5386c6-ecca-4882-b692-80c4f5a194e7" containerName="rabbitmq" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.608935 4769 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="7b5386c6-ecca-4882-b692-80c4f5a194e7" containerName="rabbitmq" Jan 22 14:05:22 crc kubenswrapper[4769]: E0122 14:05:22.608962 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12de511c-514e-496c-9fbf-6d1e10db81fc" containerName="rabbitmq" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.608968 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="12de511c-514e-496c-9fbf-6d1e10db81fc" containerName="rabbitmq" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.609132 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="12de511c-514e-496c-9fbf-6d1e10db81fc" containerName="rabbitmq" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.609148 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b5386c6-ecca-4882-b692-80c4f5a194e7" containerName="rabbitmq" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.610326 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.611991 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-zm2vm" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.612935 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.613070 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.613449 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.615022 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.615206 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.622576 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.660963 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7b5386c6-ecca-4882-b692-80c4f5a194e7-erlang-cookie-secret\") pod \"7b5386c6-ecca-4882-b692-80c4f5a194e7\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.661072 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7b5386c6-ecca-4882-b692-80c4f5a194e7-server-conf\") pod \"7b5386c6-ecca-4882-b692-80c4f5a194e7\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.661132 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7b5386c6-ecca-4882-b692-80c4f5a194e7-rabbitmq-plugins\") pod \"7b5386c6-ecca-4882-b692-80c4f5a194e7\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.661168 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqp6s\" (UniqueName: \"kubernetes.io/projected/7b5386c6-ecca-4882-b692-80c4f5a194e7-kube-api-access-kqp6s\") pod 
\"7b5386c6-ecca-4882-b692-80c4f5a194e7\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.661289 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7b5386c6-ecca-4882-b692-80c4f5a194e7-rabbitmq-tls\") pod \"7b5386c6-ecca-4882-b692-80c4f5a194e7\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.661343 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7b5386c6-ecca-4882-b692-80c4f5a194e7-plugins-conf\") pod \"7b5386c6-ecca-4882-b692-80c4f5a194e7\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.661399 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7b5386c6-ecca-4882-b692-80c4f5a194e7-rabbitmq-erlang-cookie\") pod \"7b5386c6-ecca-4882-b692-80c4f5a194e7\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.661466 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7b5386c6-ecca-4882-b692-80c4f5a194e7-config-data\") pod \"7b5386c6-ecca-4882-b692-80c4f5a194e7\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.661498 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7b5386c6-ecca-4882-b692-80c4f5a194e7-pod-info\") pod \"7b5386c6-ecca-4882-b692-80c4f5a194e7\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.661550 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7b5386c6-ecca-4882-b692-80c4f5a194e7-rabbitmq-confd\") pod \"7b5386c6-ecca-4882-b692-80c4f5a194e7\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.661601 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"7b5386c6-ecca-4882-b692-80c4f5a194e7\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.665633 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b5386c6-ecca-4882-b692-80c4f5a194e7-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "7b5386c6-ecca-4882-b692-80c4f5a194e7" (UID: "7b5386c6-ecca-4882-b692-80c4f5a194e7"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.665695 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.669254 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "persistence") pod "7b5386c6-ecca-4882-b692-80c4f5a194e7" (UID: "7b5386c6-ecca-4882-b692-80c4f5a194e7"). InnerVolumeSpecName "local-storage05-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.670258 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/7b5386c6-ecca-4882-b692-80c4f5a194e7-pod-info" (OuterVolumeSpecName: "pod-info") pod "7b5386c6-ecca-4882-b692-80c4f5a194e7" (UID: "7b5386c6-ecca-4882-b692-80c4f5a194e7"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.681042 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b5386c6-ecca-4882-b692-80c4f5a194e7-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "7b5386c6-ecca-4882-b692-80c4f5a194e7" (UID: "7b5386c6-ecca-4882-b692-80c4f5a194e7"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.681355 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b5386c6-ecca-4882-b692-80c4f5a194e7-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "7b5386c6-ecca-4882-b692-80c4f5a194e7" (UID: "7b5386c6-ecca-4882-b692-80c4f5a194e7"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.681969 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b5386c6-ecca-4882-b692-80c4f5a194e7-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "7b5386c6-ecca-4882-b692-80c4f5a194e7" (UID: "7b5386c6-ecca-4882-b692-80c4f5a194e7"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.685678 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b5386c6-ecca-4882-b692-80c4f5a194e7-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "7b5386c6-ecca-4882-b692-80c4f5a194e7" (UID: "7b5386c6-ecca-4882-b692-80c4f5a194e7"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.693155 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b5386c6-ecca-4882-b692-80c4f5a194e7-kube-api-access-kqp6s" (OuterVolumeSpecName: "kube-api-access-kqp6s") pod "7b5386c6-ecca-4882-b692-80c4f5a194e7" (UID: "7b5386c6-ecca-4882-b692-80c4f5a194e7"). InnerVolumeSpecName "kube-api-access-kqp6s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.699556 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b5386c6-ecca-4882-b692-80c4f5a194e7-config-data" (OuterVolumeSpecName: "config-data") pod "7b5386c6-ecca-4882-b692-80c4f5a194e7" (UID: "7b5386c6-ecca-4882-b692-80c4f5a194e7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.756150 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b5386c6-ecca-4882-b692-80c4f5a194e7-server-conf" (OuterVolumeSpecName: "server-conf") pod "7b5386c6-ecca-4882-b692-80c4f5a194e7" (UID: "7b5386c6-ecca-4882-b692-80c4f5a194e7"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.763563 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/962e2340-5ed3-4560-b61b-4675432bac01-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"962e2340-5ed3-4560-b61b-4675432bac01\") " pod="openstack/rabbitmq-server-0" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.763767 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/962e2340-5ed3-4560-b61b-4675432bac01-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"962e2340-5ed3-4560-b61b-4675432bac01\") " pod="openstack/rabbitmq-server-0" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.763828 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/962e2340-5ed3-4560-b61b-4675432bac01-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"962e2340-5ed3-4560-b61b-4675432bac01\") " pod="openstack/rabbitmq-server-0" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.763852 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/962e2340-5ed3-4560-b61b-4675432bac01-server-conf\") pod \"rabbitmq-server-0\" (UID: \"962e2340-5ed3-4560-b61b-4675432bac01\") " pod="openstack/rabbitmq-server-0" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.763876 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lz8xp\" (UniqueName: \"kubernetes.io/projected/962e2340-5ed3-4560-b61b-4675432bac01-kube-api-access-lz8xp\") pod \"rabbitmq-server-0\" (UID: \"962e2340-5ed3-4560-b61b-4675432bac01\") " pod="openstack/rabbitmq-server-0" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.763899 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/962e2340-5ed3-4560-b61b-4675432bac01-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"962e2340-5ed3-4560-b61b-4675432bac01\") " pod="openstack/rabbitmq-server-0" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.763953 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/962e2340-5ed3-4560-b61b-4675432bac01-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"962e2340-5ed3-4560-b61b-4675432bac01\") " pod="openstack/rabbitmq-server-0" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.763988 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"962e2340-5ed3-4560-b61b-4675432bac01\") " pod="openstack/rabbitmq-server-0" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.764049 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/962e2340-5ed3-4560-b61b-4675432bac01-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"962e2340-5ed3-4560-b61b-4675432bac01\") " pod="openstack/rabbitmq-server-0" Jan 22 14:05:22 crc 
kubenswrapper[4769]: I0122 14:05:22.764080 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/962e2340-5ed3-4560-b61b-4675432bac01-config-data\") pod \"rabbitmq-server-0\" (UID: \"962e2340-5ed3-4560-b61b-4675432bac01\") " pod="openstack/rabbitmq-server-0" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.764100 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/962e2340-5ed3-4560-b61b-4675432bac01-pod-info\") pod \"rabbitmq-server-0\" (UID: \"962e2340-5ed3-4560-b61b-4675432bac01\") " pod="openstack/rabbitmq-server-0" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.764181 4769 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7b5386c6-ecca-4882-b692-80c4f5a194e7-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.764200 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7b5386c6-ecca-4882-b692-80c4f5a194e7-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.764211 4769 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7b5386c6-ecca-4882-b692-80c4f5a194e7-pod-info\") on node \"crc\" DevicePath \"\"" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.764233 4769 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.764243 4769 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7b5386c6-ecca-4882-b692-80c4f5a194e7-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.764254 4769 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7b5386c6-ecca-4882-b692-80c4f5a194e7-server-conf\") on node \"crc\" DevicePath \"\"" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.764265 4769 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7b5386c6-ecca-4882-b692-80c4f5a194e7-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.764276 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqp6s\" (UniqueName: \"kubernetes.io/projected/7b5386c6-ecca-4882-b692-80c4f5a194e7-kube-api-access-kqp6s\") on node \"crc\" DevicePath \"\"" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.764286 4769 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7b5386c6-ecca-4882-b692-80c4f5a194e7-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.764296 4769 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7b5386c6-ecca-4882-b692-80c4f5a194e7-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.794506 4769 operation_generator.go:917] UnmountDevice succeeded for volume 
"local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.795639 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b5386c6-ecca-4882-b692-80c4f5a194e7-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "7b5386c6-ecca-4882-b692-80c4f5a194e7" (UID: "7b5386c6-ecca-4882-b692-80c4f5a194e7"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.865590 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/962e2340-5ed3-4560-b61b-4675432bac01-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"962e2340-5ed3-4560-b61b-4675432bac01\") " pod="openstack/rabbitmq-server-0" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.865640 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/962e2340-5ed3-4560-b61b-4675432bac01-server-conf\") pod \"rabbitmq-server-0\" (UID: \"962e2340-5ed3-4560-b61b-4675432bac01\") " pod="openstack/rabbitmq-server-0" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.865664 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/962e2340-5ed3-4560-b61b-4675432bac01-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"962e2340-5ed3-4560-b61b-4675432bac01\") " pod="openstack/rabbitmq-server-0" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.865687 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lz8xp\" (UniqueName: \"kubernetes.io/projected/962e2340-5ed3-4560-b61b-4675432bac01-kube-api-access-lz8xp\") pod \"rabbitmq-server-0\" (UID: \"962e2340-5ed3-4560-b61b-4675432bac01\") " pod="openstack/rabbitmq-server-0" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.865704 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/962e2340-5ed3-4560-b61b-4675432bac01-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"962e2340-5ed3-4560-b61b-4675432bac01\") " pod="openstack/rabbitmq-server-0" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.866931 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/962e2340-5ed3-4560-b61b-4675432bac01-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"962e2340-5ed3-4560-b61b-4675432bac01\") " pod="openstack/rabbitmq-server-0" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.867104 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/962e2340-5ed3-4560-b61b-4675432bac01-server-conf\") pod \"rabbitmq-server-0\" (UID: \"962e2340-5ed3-4560-b61b-4675432bac01\") " pod="openstack/rabbitmq-server-0" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.868205 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/962e2340-5ed3-4560-b61b-4675432bac01-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"962e2340-5ed3-4560-b61b-4675432bac01\") " pod="openstack/rabbitmq-server-0" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.868505 4769 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/962e2340-5ed3-4560-b61b-4675432bac01-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"962e2340-5ed3-4560-b61b-4675432bac01\") " pod="openstack/rabbitmq-server-0" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.869211 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"962e2340-5ed3-4560-b61b-4675432bac01\") " pod="openstack/rabbitmq-server-0" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.869405 4769 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"962e2340-5ed3-4560-b61b-4675432bac01\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/rabbitmq-server-0" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.869607 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/962e2340-5ed3-4560-b61b-4675432bac01-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"962e2340-5ed3-4560-b61b-4675432bac01\") " pod="openstack/rabbitmq-server-0" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.869656 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/962e2340-5ed3-4560-b61b-4675432bac01-config-data\") pod \"rabbitmq-server-0\" (UID: \"962e2340-5ed3-4560-b61b-4675432bac01\") " pod="openstack/rabbitmq-server-0" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.869671 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/962e2340-5ed3-4560-b61b-4675432bac01-pod-info\") pod \"rabbitmq-server-0\" (UID: \"962e2340-5ed3-4560-b61b-4675432bac01\") " pod="openstack/rabbitmq-server-0" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.869743 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/962e2340-5ed3-4560-b61b-4675432bac01-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"962e2340-5ed3-4560-b61b-4675432bac01\") " pod="openstack/rabbitmq-server-0" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.869845 4769 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7b5386c6-ecca-4882-b692-80c4f5a194e7-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.869857 4769 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.869950 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/962e2340-5ed3-4560-b61b-4675432bac01-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"962e2340-5ed3-4560-b61b-4675432bac01\") " pod="openstack/rabbitmq-server-0" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.870580 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
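
For the local persistent volume the log shows mounting split into two phases: MountVolume.MountDevice prepares a single node-level mount (the "device mount path" /mnt/openstack/pv07), after which MountVolume.SetUp makes the volume available inside the pod's volume directory. A sketch of that split as an interface, with hypothetical names loosely mirroring the phases visible in the messages, not the kubelet's actual plugin interface:

    package main

    import "fmt"

    // Hypothetical two-phase mount: MountDevice runs once per volume per
    // node; SetUp runs once per pod using the volume (e.g. a bind mount of
    // the device path into the pod's volume directory).
    type mounter interface {
        MountDevice(deviceMountPath string) error
        SetUp(podVolumeDir string) error
    }

    type localVolume struct{ source string }

    func (l localVolume) MountDevice(deviceMountPath string) error {
        fmt.Printf("MountDevice: %s ready at %s\n", l.source, deviceMountPath)
        return nil
    }

    func (l localVolume) SetUp(podVolumeDir string) error {
        fmt.Printf("SetUp: bind %s into %s\n", l.source, podVolumeDir)
        return nil
    }

    func main() {
        var m mounter = localVolume{source: "/mnt/openstack/pv07"}
        _ = m.MountDevice("/mnt/openstack/pv07")
        _ = m.SetUp("/var/lib/kubelet/pods/962e2340-.../volumes/local-storage07-crc")
    }
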
\"kubernetes.io/configmap/962e2340-5ed3-4560-b61b-4675432bac01-config-data\") pod \"rabbitmq-server-0\" (UID: \"962e2340-5ed3-4560-b61b-4675432bac01\") " pod="openstack/rabbitmq-server-0" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.870712 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/962e2340-5ed3-4560-b61b-4675432bac01-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"962e2340-5ed3-4560-b61b-4675432bac01\") " pod="openstack/rabbitmq-server-0" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.872975 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/962e2340-5ed3-4560-b61b-4675432bac01-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"962e2340-5ed3-4560-b61b-4675432bac01\") " pod="openstack/rabbitmq-server-0" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.873348 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/962e2340-5ed3-4560-b61b-4675432bac01-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"962e2340-5ed3-4560-b61b-4675432bac01\") " pod="openstack/rabbitmq-server-0" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.873406 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/962e2340-5ed3-4560-b61b-4675432bac01-pod-info\") pod \"rabbitmq-server-0\" (UID: \"962e2340-5ed3-4560-b61b-4675432bac01\") " pod="openstack/rabbitmq-server-0" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.883806 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lz8xp\" (UniqueName: \"kubernetes.io/projected/962e2340-5ed3-4560-b61b-4675432bac01-kube-api-access-lz8xp\") pod \"rabbitmq-server-0\" (UID: \"962e2340-5ed3-4560-b61b-4675432bac01\") " pod="openstack/rabbitmq-server-0" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.898071 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12de511c-514e-496c-9fbf-6d1e10db81fc" path="/var/lib/kubelet/pods/12de511c-514e-496c-9fbf-6d1e10db81fc/volumes" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.911586 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"962e2340-5ed3-4560-b61b-4675432bac01\") " pod="openstack/rabbitmq-server-0" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.931321 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.378810 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.493277 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.494640 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"962e2340-5ed3-4560-b61b-4675432bac01","Type":"ContainerStarted","Data":"f342f136d881af427f064d4b6f00d7a8af4922e009ad2acef9a4431fd2fce2a6"} Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.628075 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.638611 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.650838 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.652265 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.654856 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.654962 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-5c97b" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.655068 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.655658 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.656852 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.656904 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.657021 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.680845 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.789617 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4kjs\" (UniqueName: \"kubernetes.io/projected/1fd40f71-8afc-45fa-8a93-e784fb5f63c8-kube-api-access-q4kjs\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd40f71-8afc-45fa-8a93-e784fb5f63c8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.790027 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1fd40f71-8afc-45fa-8a93-e784fb5f63c8-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd40f71-8afc-45fa-8a93-e784fb5f63c8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.790068 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1fd40f71-8afc-45fa-8a93-e784fb5f63c8-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd40f71-8afc-45fa-8a93-e784fb5f63c8\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.790100 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1fd40f71-8afc-45fa-8a93-e784fb5f63c8-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd40f71-8afc-45fa-8a93-e784fb5f63c8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.790148 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1fd40f71-8afc-45fa-8a93-e784fb5f63c8-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd40f71-8afc-45fa-8a93-e784fb5f63c8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.790473 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1fd40f71-8afc-45fa-8a93-e784fb5f63c8-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd40f71-8afc-45fa-8a93-e784fb5f63c8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.790554 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1fd40f71-8afc-45fa-8a93-e784fb5f63c8-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd40f71-8afc-45fa-8a93-e784fb5f63c8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.790590 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1fd40f71-8afc-45fa-8a93-e784fb5f63c8-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd40f71-8afc-45fa-8a93-e784fb5f63c8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.790632 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd40f71-8afc-45fa-8a93-e784fb5f63c8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.790689 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1fd40f71-8afc-45fa-8a93-e784fb5f63c8-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd40f71-8afc-45fa-8a93-e784fb5f63c8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.790726 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1fd40f71-8afc-45fa-8a93-e784fb5f63c8-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd40f71-8afc-45fa-8a93-e784fb5f63c8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.893040 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1fd40f71-8afc-45fa-8a93-e784fb5f63c8-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd40f71-8afc-45fa-8a93-e784fb5f63c8\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.893154 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4kjs\" (UniqueName: \"kubernetes.io/projected/1fd40f71-8afc-45fa-8a93-e784fb5f63c8-kube-api-access-q4kjs\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd40f71-8afc-45fa-8a93-e784fb5f63c8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.893192 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1fd40f71-8afc-45fa-8a93-e784fb5f63c8-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd40f71-8afc-45fa-8a93-e784fb5f63c8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.893227 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1fd40f71-8afc-45fa-8a93-e784fb5f63c8-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd40f71-8afc-45fa-8a93-e784fb5f63c8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.893260 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1fd40f71-8afc-45fa-8a93-e784fb5f63c8-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd40f71-8afc-45fa-8a93-e784fb5f63c8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.893317 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1fd40f71-8afc-45fa-8a93-e784fb5f63c8-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd40f71-8afc-45fa-8a93-e784fb5f63c8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.893426 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1fd40f71-8afc-45fa-8a93-e784fb5f63c8-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd40f71-8afc-45fa-8a93-e784fb5f63c8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.893469 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1fd40f71-8afc-45fa-8a93-e784fb5f63c8-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd40f71-8afc-45fa-8a93-e784fb5f63c8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.893501 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1fd40f71-8afc-45fa-8a93-e784fb5f63c8-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd40f71-8afc-45fa-8a93-e784fb5f63c8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.893538 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd40f71-8afc-45fa-8a93-e784fb5f63c8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.893611 4769 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1fd40f71-8afc-45fa-8a93-e784fb5f63c8-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd40f71-8afc-45fa-8a93-e784fb5f63c8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.894354 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1fd40f71-8afc-45fa-8a93-e784fb5f63c8-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd40f71-8afc-45fa-8a93-e784fb5f63c8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.894454 4769 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd40f71-8afc-45fa-8a93-e784fb5f63c8\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/rabbitmq-cell1-server-0" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.894864 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1fd40f71-8afc-45fa-8a93-e784fb5f63c8-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd40f71-8afc-45fa-8a93-e784fb5f63c8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.895265 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1fd40f71-8afc-45fa-8a93-e784fb5f63c8-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd40f71-8afc-45fa-8a93-e784fb5f63c8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.895456 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1fd40f71-8afc-45fa-8a93-e784fb5f63c8-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd40f71-8afc-45fa-8a93-e784fb5f63c8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.896235 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1fd40f71-8afc-45fa-8a93-e784fb5f63c8-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd40f71-8afc-45fa-8a93-e784fb5f63c8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.901300 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1fd40f71-8afc-45fa-8a93-e784fb5f63c8-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd40f71-8afc-45fa-8a93-e784fb5f63c8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.901652 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1fd40f71-8afc-45fa-8a93-e784fb5f63c8-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd40f71-8afc-45fa-8a93-e784fb5f63c8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.910040 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1fd40f71-8afc-45fa-8a93-e784fb5f63c8-pod-info\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"1fd40f71-8afc-45fa-8a93-e784fb5f63c8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.911609 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1fd40f71-8afc-45fa-8a93-e784fb5f63c8-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd40f71-8afc-45fa-8a93-e784fb5f63c8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.913831 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4kjs\" (UniqueName: \"kubernetes.io/projected/1fd40f71-8afc-45fa-8a93-e784fb5f63c8-kube-api-access-q4kjs\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd40f71-8afc-45fa-8a93-e784fb5f63c8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.925774 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd40f71-8afc-45fa-8a93-e784fb5f63c8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.971375 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 22 14:05:24 crc kubenswrapper[4769]: I0122 14:05:24.417476 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 22 14:05:24 crc kubenswrapper[4769]: W0122 14:05:24.515682 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1fd40f71_8afc_45fa_8a93_e784fb5f63c8.slice/crio-dd0a193bef14b19d3d5efc83e5b72ab87d7937837ac46f023c416837513b40e8 WatchSource:0}: Error finding container dd0a193bef14b19d3d5efc83e5b72ab87d7937837ac46f023c416837513b40e8: Status 404 returned error can't find the container with id dd0a193bef14b19d3d5efc83e5b72ab87d7937837ac46f023c416837513b40e8 Jan 22 14:05:24 crc kubenswrapper[4769]: I0122 14:05:24.900470 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b5386c6-ecca-4882-b692-80c4f5a194e7" path="/var/lib/kubelet/pods/7b5386c6-ecca-4882-b692-80c4f5a194e7/volumes" Jan 22 14:05:25 crc kubenswrapper[4769]: I0122 14:05:25.521043 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"962e2340-5ed3-4560-b61b-4675432bac01","Type":"ContainerStarted","Data":"e72578ac8c9214570629443c31741f66617c0c80ddefde9c00cd86332e730626"} Jan 22 14:05:25 crc kubenswrapper[4769]: I0122 14:05:25.523459 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1fd40f71-8afc-45fa-8a93-e784fb5f63c8","Type":"ContainerStarted","Data":"dd0a193bef14b19d3d5efc83e5b72ab87d7937837ac46f023c416837513b40e8"} Jan 22 14:05:26 crc kubenswrapper[4769]: I0122 14:05:26.533841 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1fd40f71-8afc-45fa-8a93-e784fb5f63c8","Type":"ContainerStarted","Data":"35252555853ce340253c0eefa638373f8346698496121a40c846f916b330db36"} Jan 22 14:05:40 crc kubenswrapper[4769]: I0122 14:05:40.481901 4769 patch_prober.go:28] interesting pod/machine-config-daemon-hwhw7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 14:05:40 crc kubenswrapper[4769]: I0122 14:05:40.482508 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 14:05:57 crc kubenswrapper[4769]: I0122 14:05:57.830328 4769 generic.go:334] "Generic (PLEG): container finished" podID="962e2340-5ed3-4560-b61b-4675432bac01" containerID="e72578ac8c9214570629443c31741f66617c0c80ddefde9c00cd86332e730626" exitCode=0 Jan 22 14:05:57 crc kubenswrapper[4769]: I0122 14:05:57.830396 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"962e2340-5ed3-4560-b61b-4675432bac01","Type":"ContainerDied","Data":"e72578ac8c9214570629443c31741f66617c0c80ddefde9c00cd86332e730626"} Jan 22 14:05:58 crc kubenswrapper[4769]: I0122 14:05:58.840415 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"962e2340-5ed3-4560-b61b-4675432bac01","Type":"ContainerStarted","Data":"af186a92290f9236c6290610ca7c9388b55bbbafd3dfe2171977115f0e5758f3"} Jan 22 14:05:58 crc kubenswrapper[4769]: I0122 14:05:58.840928 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 22 14:05:58 crc kubenswrapper[4769]: I0122 14:05:58.842366 4769 generic.go:334] "Generic (PLEG): container finished" podID="1fd40f71-8afc-45fa-8a93-e784fb5f63c8" containerID="35252555853ce340253c0eefa638373f8346698496121a40c846f916b330db36" exitCode=0 Jan 22 14:05:58 crc kubenswrapper[4769]: I0122 14:05:58.842408 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1fd40f71-8afc-45fa-8a93-e784fb5f63c8","Type":"ContainerDied","Data":"35252555853ce340253c0eefa638373f8346698496121a40c846f916b330db36"} Jan 22 14:05:58 crc kubenswrapper[4769]: I0122 14:05:58.867698 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.867679845 podStartE2EDuration="36.867679845s" podCreationTimestamp="2026-01-22 14:05:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:05:58.864047537 +0000 UTC m=+1338.275157466" watchObservedRunningTime="2026-01-22 14:05:58.867679845 +0000 UTC m=+1338.278789774" Jan 22 14:05:59 crc kubenswrapper[4769]: I0122 14:05:59.853555 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1fd40f71-8afc-45fa-8a93-e784fb5f63c8","Type":"ContainerStarted","Data":"220729d2dc07aeae0f1cc83562efd9a4bb53bd0aa613024a1bdfce66661c2aef"} Jan 22 14:05:59 crc kubenswrapper[4769]: I0122 14:05:59.854242 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 22 14:05:59 crc kubenswrapper[4769]: I0122 14:05:59.876142 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.876126083 podStartE2EDuration="36.876126083s" podCreationTimestamp="2026-01-22 14:05:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-22 14:05:59.871917359 +0000 UTC m=+1339.283027298" watchObservedRunningTime="2026-01-22 14:05:59.876126083 +0000 UTC m=+1339.287236002" Jan 22 14:06:01 crc kubenswrapper[4769]: I0122 14:06:01.019569 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-tbnjt/must-gather-nlc24"] Jan 22 14:06:01 crc kubenswrapper[4769]: I0122 14:06:01.021775 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tbnjt/must-gather-nlc24" Jan 22 14:06:01 crc kubenswrapper[4769]: I0122 14:06:01.024896 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-tbnjt"/"openshift-service-ca.crt" Jan 22 14:06:01 crc kubenswrapper[4769]: I0122 14:06:01.026739 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-tbnjt"/"kube-root-ca.crt" Jan 22 14:06:01 crc kubenswrapper[4769]: I0122 14:06:01.026989 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-tbnjt"/"default-dockercfg-klv5q" Jan 22 14:06:01 crc kubenswrapper[4769]: I0122 14:06:01.039137 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-tbnjt/must-gather-nlc24"] Jan 22 14:06:01 crc kubenswrapper[4769]: I0122 14:06:01.109835 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7529a8b3-1901-4ac4-9cee-f3ece4581ea8-must-gather-output\") pod \"must-gather-nlc24\" (UID: \"7529a8b3-1901-4ac4-9cee-f3ece4581ea8\") " pod="openshift-must-gather-tbnjt/must-gather-nlc24" Jan 22 14:06:01 crc kubenswrapper[4769]: I0122 14:06:01.109955 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v94jc\" (UniqueName: \"kubernetes.io/projected/7529a8b3-1901-4ac4-9cee-f3ece4581ea8-kube-api-access-v94jc\") pod \"must-gather-nlc24\" (UID: \"7529a8b3-1901-4ac4-9cee-f3ece4581ea8\") " pod="openshift-must-gather-tbnjt/must-gather-nlc24" Jan 22 14:06:01 crc kubenswrapper[4769]: I0122 14:06:01.211449 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7529a8b3-1901-4ac4-9cee-f3ece4581ea8-must-gather-output\") pod \"must-gather-nlc24\" (UID: \"7529a8b3-1901-4ac4-9cee-f3ece4581ea8\") " pod="openshift-must-gather-tbnjt/must-gather-nlc24" Jan 22 14:06:01 crc kubenswrapper[4769]: I0122 14:06:01.211513 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v94jc\" (UniqueName: \"kubernetes.io/projected/7529a8b3-1901-4ac4-9cee-f3ece4581ea8-kube-api-access-v94jc\") pod \"must-gather-nlc24\" (UID: \"7529a8b3-1901-4ac4-9cee-f3ece4581ea8\") " pod="openshift-must-gather-tbnjt/must-gather-nlc24" Jan 22 14:06:01 crc kubenswrapper[4769]: I0122 14:06:01.211973 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7529a8b3-1901-4ac4-9cee-f3ece4581ea8-must-gather-output\") pod \"must-gather-nlc24\" (UID: \"7529a8b3-1901-4ac4-9cee-f3ece4581ea8\") " pod="openshift-must-gather-tbnjt/must-gather-nlc24" Jan 22 14:06:01 crc kubenswrapper[4769]: I0122 14:06:01.233434 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v94jc\" (UniqueName: \"kubernetes.io/projected/7529a8b3-1901-4ac4-9cee-f3ece4581ea8-kube-api-access-v94jc\") pod \"must-gather-nlc24\" (UID: 
\"7529a8b3-1901-4ac4-9cee-f3ece4581ea8\") " pod="openshift-must-gather-tbnjt/must-gather-nlc24" Jan 22 14:06:01 crc kubenswrapper[4769]: I0122 14:06:01.339111 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tbnjt/must-gather-nlc24" Jan 22 14:06:01 crc kubenswrapper[4769]: I0122 14:06:01.870442 4769 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 14:06:01 crc kubenswrapper[4769]: I0122 14:06:01.874849 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tbnjt/must-gather-nlc24" event={"ID":"7529a8b3-1901-4ac4-9cee-f3ece4581ea8","Type":"ContainerStarted","Data":"a7d09c897c4e58008d980c499629ff714b40edf727052df005ba245496e82e9c"} Jan 22 14:06:01 crc kubenswrapper[4769]: I0122 14:06:01.887455 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-tbnjt/must-gather-nlc24"] Jan 22 14:06:09 crc kubenswrapper[4769]: I0122 14:06:09.966648 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tbnjt/must-gather-nlc24" event={"ID":"7529a8b3-1901-4ac4-9cee-f3ece4581ea8","Type":"ContainerStarted","Data":"cd35217481d81f29cfb74abcbd43b14ccfe181f633147cf4756bf7bb55d0937b"} Jan 22 14:06:10 crc kubenswrapper[4769]: I0122 14:06:10.482166 4769 patch_prober.go:28] interesting pod/machine-config-daemon-hwhw7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 14:06:10 crc kubenswrapper[4769]: I0122 14:06:10.482242 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 14:06:10 crc kubenswrapper[4769]: I0122 14:06:10.978277 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tbnjt/must-gather-nlc24" event={"ID":"7529a8b3-1901-4ac4-9cee-f3ece4581ea8","Type":"ContainerStarted","Data":"1dc63ef307cab4453f502e73b5f525685fd266557500aa01a5c30784d48c028b"} Jan 22 14:06:11 crc kubenswrapper[4769]: I0122 14:06:10.999928 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-tbnjt/must-gather-nlc24" podStartSLOduration=3.194665982 podStartE2EDuration="10.999907075s" podCreationTimestamp="2026-01-22 14:06:00 +0000 UTC" firstStartedPulling="2026-01-22 14:06:01.870066818 +0000 UTC m=+1341.281176747" lastFinishedPulling="2026-01-22 14:06:09.675307911 +0000 UTC m=+1349.086417840" observedRunningTime="2026-01-22 14:06:10.994461947 +0000 UTC m=+1350.405571886" watchObservedRunningTime="2026-01-22 14:06:10.999907075 +0000 UTC m=+1350.411017004" Jan 22 14:06:12 crc kubenswrapper[4769]: I0122 14:06:12.935004 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 22 14:06:13 crc kubenswrapper[4769]: I0122 14:06:13.550352 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-tbnjt/crc-debug-89q4b"] Jan 22 14:06:13 crc kubenswrapper[4769]: I0122 14:06:13.552495 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tbnjt/crc-debug-89q4b" Jan 22 14:06:13 crc kubenswrapper[4769]: I0122 14:06:13.659261 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkxtp\" (UniqueName: \"kubernetes.io/projected/c8467ba6-6bd4-4eaa-a313-94ad5c8db789-kube-api-access-wkxtp\") pod \"crc-debug-89q4b\" (UID: \"c8467ba6-6bd4-4eaa-a313-94ad5c8db789\") " pod="openshift-must-gather-tbnjt/crc-debug-89q4b" Jan 22 14:06:13 crc kubenswrapper[4769]: I0122 14:06:13.659344 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c8467ba6-6bd4-4eaa-a313-94ad5c8db789-host\") pod \"crc-debug-89q4b\" (UID: \"c8467ba6-6bd4-4eaa-a313-94ad5c8db789\") " pod="openshift-must-gather-tbnjt/crc-debug-89q4b" Jan 22 14:06:13 crc kubenswrapper[4769]: I0122 14:06:13.761527 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c8467ba6-6bd4-4eaa-a313-94ad5c8db789-host\") pod \"crc-debug-89q4b\" (UID: \"c8467ba6-6bd4-4eaa-a313-94ad5c8db789\") " pod="openshift-must-gather-tbnjt/crc-debug-89q4b" Jan 22 14:06:13 crc kubenswrapper[4769]: I0122 14:06:13.761612 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c8467ba6-6bd4-4eaa-a313-94ad5c8db789-host\") pod \"crc-debug-89q4b\" (UID: \"c8467ba6-6bd4-4eaa-a313-94ad5c8db789\") " pod="openshift-must-gather-tbnjt/crc-debug-89q4b" Jan 22 14:06:13 crc kubenswrapper[4769]: I0122 14:06:13.762061 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkxtp\" (UniqueName: \"kubernetes.io/projected/c8467ba6-6bd4-4eaa-a313-94ad5c8db789-kube-api-access-wkxtp\") pod \"crc-debug-89q4b\" (UID: \"c8467ba6-6bd4-4eaa-a313-94ad5c8db789\") " pod="openshift-must-gather-tbnjt/crc-debug-89q4b" Jan 22 14:06:13 crc kubenswrapper[4769]: I0122 14:06:13.779894 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkxtp\" (UniqueName: \"kubernetes.io/projected/c8467ba6-6bd4-4eaa-a313-94ad5c8db789-kube-api-access-wkxtp\") pod \"crc-debug-89q4b\" (UID: \"c8467ba6-6bd4-4eaa-a313-94ad5c8db789\") " pod="openshift-must-gather-tbnjt/crc-debug-89q4b" Jan 22 14:06:13 crc kubenswrapper[4769]: I0122 14:06:13.881907 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tbnjt/crc-debug-89q4b" Jan 22 14:06:13 crc kubenswrapper[4769]: I0122 14:06:13.979052 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 22 14:06:14 crc kubenswrapper[4769]: I0122 14:06:14.019607 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tbnjt/crc-debug-89q4b" event={"ID":"c8467ba6-6bd4-4eaa-a313-94ad5c8db789","Type":"ContainerStarted","Data":"1cb2cf491bb9c9686a93c3b68612bdef492589f7f683dc9b3c9232ec1e232336"} Jan 22 14:06:26 crc kubenswrapper[4769]: I0122 14:06:26.160653 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tbnjt/crc-debug-89q4b" event={"ID":"c8467ba6-6bd4-4eaa-a313-94ad5c8db789","Type":"ContainerStarted","Data":"6a9857699ee5a25dcfbbfd97a9806c7b0bc9c1947fe854676a7dd2547f60a656"} Jan 22 14:06:26 crc kubenswrapper[4769]: I0122 14:06:26.186626 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-tbnjt/crc-debug-89q4b" podStartSLOduration=1.536388214 podStartE2EDuration="13.186606284s" podCreationTimestamp="2026-01-22 14:06:13 +0000 UTC" firstStartedPulling="2026-01-22 14:06:13.93578464 +0000 UTC m=+1353.346894569" lastFinishedPulling="2026-01-22 14:06:25.58600271 +0000 UTC m=+1364.997112639" observedRunningTime="2026-01-22 14:06:26.177598401 +0000 UTC m=+1365.588708330" watchObservedRunningTime="2026-01-22 14:06:26.186606284 +0000 UTC m=+1365.597716213" Jan 22 14:06:40 crc kubenswrapper[4769]: I0122 14:06:40.482426 4769 patch_prober.go:28] interesting pod/machine-config-daemon-hwhw7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 14:06:40 crc kubenswrapper[4769]: I0122 14:06:40.483109 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 14:06:40 crc kubenswrapper[4769]: I0122 14:06:40.483166 4769 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" Jan 22 14:06:40 crc kubenswrapper[4769]: I0122 14:06:40.484029 4769 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b11c852b1916b3e6aabc4731560f2f295531ff82773fd1f45e29d26517b1467f"} pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 14:06:40 crc kubenswrapper[4769]: I0122 14:06:40.484089 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419" containerName="machine-config-daemon" containerID="cri-o://b11c852b1916b3e6aabc4731560f2f295531ff82773fd1f45e29d26517b1467f" gracePeriod=600 Jan 22 14:06:41 crc kubenswrapper[4769]: I0122 14:06:41.288423 4769 generic.go:334] "Generic (PLEG): container finished" podID="f0af8746-c9f0-48e6-8a60-02fed286b419" containerID="b11c852b1916b3e6aabc4731560f2f295531ff82773fd1f45e29d26517b1467f" 
exitCode=0 Jan 22 14:06:41 crc kubenswrapper[4769]: I0122 14:06:41.288497 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" event={"ID":"f0af8746-c9f0-48e6-8a60-02fed286b419","Type":"ContainerDied","Data":"b11c852b1916b3e6aabc4731560f2f295531ff82773fd1f45e29d26517b1467f"} Jan 22 14:06:41 crc kubenswrapper[4769]: I0122 14:06:41.288913 4769 scope.go:117] "RemoveContainer" containerID="53e8fc2db9705c596d7460e51a2fbb034ceda2ed4d75e601aaaaedcba02d24aa" Jan 22 14:06:42 crc kubenswrapper[4769]: I0122 14:06:42.300931 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" event={"ID":"f0af8746-c9f0-48e6-8a60-02fed286b419","Type":"ContainerStarted","Data":"e7bae61700e44833a742aeb47b10d93605fc854e6fc9c589859554c1c0a5b135"} Jan 22 14:06:42 crc kubenswrapper[4769]: I0122 14:06:42.303754 4769 generic.go:334] "Generic (PLEG): container finished" podID="c8467ba6-6bd4-4eaa-a313-94ad5c8db789" containerID="6a9857699ee5a25dcfbbfd97a9806c7b0bc9c1947fe854676a7dd2547f60a656" exitCode=0 Jan 22 14:06:42 crc kubenswrapper[4769]: I0122 14:06:42.303820 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tbnjt/crc-debug-89q4b" event={"ID":"c8467ba6-6bd4-4eaa-a313-94ad5c8db789","Type":"ContainerDied","Data":"6a9857699ee5a25dcfbbfd97a9806c7b0bc9c1947fe854676a7dd2547f60a656"} Jan 22 14:06:43 crc kubenswrapper[4769]: I0122 14:06:43.435963 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tbnjt/crc-debug-89q4b" Jan 22 14:06:43 crc kubenswrapper[4769]: I0122 14:06:43.478996 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-tbnjt/crc-debug-89q4b"] Jan 22 14:06:43 crc kubenswrapper[4769]: I0122 14:06:43.488065 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-tbnjt/crc-debug-89q4b"] Jan 22 14:06:43 crc kubenswrapper[4769]: I0122 14:06:43.510253 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c8467ba6-6bd4-4eaa-a313-94ad5c8db789-host\") pod \"c8467ba6-6bd4-4eaa-a313-94ad5c8db789\" (UID: \"c8467ba6-6bd4-4eaa-a313-94ad5c8db789\") " Jan 22 14:06:43 crc kubenswrapper[4769]: I0122 14:06:43.510339 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wkxtp\" (UniqueName: \"kubernetes.io/projected/c8467ba6-6bd4-4eaa-a313-94ad5c8db789-kube-api-access-wkxtp\") pod \"c8467ba6-6bd4-4eaa-a313-94ad5c8db789\" (UID: \"c8467ba6-6bd4-4eaa-a313-94ad5c8db789\") " Jan 22 14:06:43 crc kubenswrapper[4769]: I0122 14:06:43.510486 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c8467ba6-6bd4-4eaa-a313-94ad5c8db789-host" (OuterVolumeSpecName: "host") pod "c8467ba6-6bd4-4eaa-a313-94ad5c8db789" (UID: "c8467ba6-6bd4-4eaa-a313-94ad5c8db789"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 14:06:43 crc kubenswrapper[4769]: I0122 14:06:43.510886 4769 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c8467ba6-6bd4-4eaa-a313-94ad5c8db789-host\") on node \"crc\" DevicePath \"\"" Jan 22 14:06:43 crc kubenswrapper[4769]: I0122 14:06:43.525131 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8467ba6-6bd4-4eaa-a313-94ad5c8db789-kube-api-access-wkxtp" (OuterVolumeSpecName: "kube-api-access-wkxtp") pod "c8467ba6-6bd4-4eaa-a313-94ad5c8db789" (UID: "c8467ba6-6bd4-4eaa-a313-94ad5c8db789"). InnerVolumeSpecName "kube-api-access-wkxtp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:06:43 crc kubenswrapper[4769]: I0122 14:06:43.612318 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wkxtp\" (UniqueName: \"kubernetes.io/projected/c8467ba6-6bd4-4eaa-a313-94ad5c8db789-kube-api-access-wkxtp\") on node \"crc\" DevicePath \"\"" Jan 22 14:06:44 crc kubenswrapper[4769]: I0122 14:06:44.334275 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1cb2cf491bb9c9686a93c3b68612bdef492589f7f683dc9b3c9232ec1e232336" Jan 22 14:06:44 crc kubenswrapper[4769]: I0122 14:06:44.334339 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tbnjt/crc-debug-89q4b" Jan 22 14:06:44 crc kubenswrapper[4769]: E0122 14:06:44.453942 4769 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc8467ba6_6bd4_4eaa_a313_94ad5c8db789.slice/crio-1cb2cf491bb9c9686a93c3b68612bdef492589f7f683dc9b3c9232ec1e232336\": RecentStats: unable to find data in memory cache]" Jan 22 14:06:44 crc kubenswrapper[4769]: I0122 14:06:44.653369 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-tbnjt/crc-debug-z66f5"] Jan 22 14:06:44 crc kubenswrapper[4769]: E0122 14:06:44.653723 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8467ba6-6bd4-4eaa-a313-94ad5c8db789" containerName="container-00" Jan 22 14:06:44 crc kubenswrapper[4769]: I0122 14:06:44.653735 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8467ba6-6bd4-4eaa-a313-94ad5c8db789" containerName="container-00" Jan 22 14:06:44 crc kubenswrapper[4769]: I0122 14:06:44.653930 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8467ba6-6bd4-4eaa-a313-94ad5c8db789" containerName="container-00" Jan 22 14:06:44 crc kubenswrapper[4769]: I0122 14:06:44.654491 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tbnjt/crc-debug-z66f5" Jan 22 14:06:44 crc kubenswrapper[4769]: I0122 14:06:44.829397 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxszj\" (UniqueName: \"kubernetes.io/projected/bd739567-06f9-45a6-b424-6ff02babf529-kube-api-access-dxszj\") pod \"crc-debug-z66f5\" (UID: \"bd739567-06f9-45a6-b424-6ff02babf529\") " pod="openshift-must-gather-tbnjt/crc-debug-z66f5" Jan 22 14:06:44 crc kubenswrapper[4769]: I0122 14:06:44.829524 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bd739567-06f9-45a6-b424-6ff02babf529-host\") pod \"crc-debug-z66f5\" (UID: \"bd739567-06f9-45a6-b424-6ff02babf529\") " pod="openshift-must-gather-tbnjt/crc-debug-z66f5" Jan 22 14:06:44 crc kubenswrapper[4769]: I0122 14:06:44.893344 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8467ba6-6bd4-4eaa-a313-94ad5c8db789" path="/var/lib/kubelet/pods/c8467ba6-6bd4-4eaa-a313-94ad5c8db789/volumes" Jan 22 14:06:44 crc kubenswrapper[4769]: I0122 14:06:44.931601 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxszj\" (UniqueName: \"kubernetes.io/projected/bd739567-06f9-45a6-b424-6ff02babf529-kube-api-access-dxszj\") pod \"crc-debug-z66f5\" (UID: \"bd739567-06f9-45a6-b424-6ff02babf529\") " pod="openshift-must-gather-tbnjt/crc-debug-z66f5" Jan 22 14:06:44 crc kubenswrapper[4769]: I0122 14:06:44.931715 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bd739567-06f9-45a6-b424-6ff02babf529-host\") pod \"crc-debug-z66f5\" (UID: \"bd739567-06f9-45a6-b424-6ff02babf529\") " pod="openshift-must-gather-tbnjt/crc-debug-z66f5" Jan 22 14:06:44 crc kubenswrapper[4769]: I0122 14:06:44.932000 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bd739567-06f9-45a6-b424-6ff02babf529-host\") pod \"crc-debug-z66f5\" (UID: \"bd739567-06f9-45a6-b424-6ff02babf529\") " pod="openshift-must-gather-tbnjt/crc-debug-z66f5" Jan 22 14:06:44 crc kubenswrapper[4769]: I0122 14:06:44.954364 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxszj\" (UniqueName: \"kubernetes.io/projected/bd739567-06f9-45a6-b424-6ff02babf529-kube-api-access-dxszj\") pod \"crc-debug-z66f5\" (UID: \"bd739567-06f9-45a6-b424-6ff02babf529\") " pod="openshift-must-gather-tbnjt/crc-debug-z66f5" Jan 22 14:06:44 crc kubenswrapper[4769]: I0122 14:06:44.979775 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tbnjt/crc-debug-z66f5" Jan 22 14:06:45 crc kubenswrapper[4769]: W0122 14:06:45.025088 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd739567_06f9_45a6_b424_6ff02babf529.slice/crio-16ed1b6298584cb6eee13f8c3c7bf972f210ed4b1803a6eda46f6a9bc1b72e1f WatchSource:0}: Error finding container 16ed1b6298584cb6eee13f8c3c7bf972f210ed4b1803a6eda46f6a9bc1b72e1f: Status 404 returned error can't find the container with id 16ed1b6298584cb6eee13f8c3c7bf972f210ed4b1803a6eda46f6a9bc1b72e1f Jan 22 14:06:45 crc kubenswrapper[4769]: I0122 14:06:45.348405 4769 generic.go:334] "Generic (PLEG): container finished" podID="bd739567-06f9-45a6-b424-6ff02babf529" containerID="11242a9a9c2d5e36764427e969ca476d75b5cdf241d3ec86f11fa1bb416dffb8" exitCode=1 Jan 22 14:06:45 crc kubenswrapper[4769]: I0122 14:06:45.348452 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tbnjt/crc-debug-z66f5" event={"ID":"bd739567-06f9-45a6-b424-6ff02babf529","Type":"ContainerDied","Data":"11242a9a9c2d5e36764427e969ca476d75b5cdf241d3ec86f11fa1bb416dffb8"} Jan 22 14:06:45 crc kubenswrapper[4769]: I0122 14:06:45.348485 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tbnjt/crc-debug-z66f5" event={"ID":"bd739567-06f9-45a6-b424-6ff02babf529","Type":"ContainerStarted","Data":"16ed1b6298584cb6eee13f8c3c7bf972f210ed4b1803a6eda46f6a9bc1b72e1f"} Jan 22 14:06:45 crc kubenswrapper[4769]: I0122 14:06:45.386650 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-tbnjt/crc-debug-z66f5"] Jan 22 14:06:45 crc kubenswrapper[4769]: I0122 14:06:45.395046 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-tbnjt/crc-debug-z66f5"] Jan 22 14:06:46 crc kubenswrapper[4769]: I0122 14:06:46.467988 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tbnjt/crc-debug-z66f5" Jan 22 14:06:46 crc kubenswrapper[4769]: I0122 14:06:46.571844 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxszj\" (UniqueName: \"kubernetes.io/projected/bd739567-06f9-45a6-b424-6ff02babf529-kube-api-access-dxszj\") pod \"bd739567-06f9-45a6-b424-6ff02babf529\" (UID: \"bd739567-06f9-45a6-b424-6ff02babf529\") " Jan 22 14:06:46 crc kubenswrapper[4769]: I0122 14:06:46.571961 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bd739567-06f9-45a6-b424-6ff02babf529-host\") pod \"bd739567-06f9-45a6-b424-6ff02babf529\" (UID: \"bd739567-06f9-45a6-b424-6ff02babf529\") " Jan 22 14:06:46 crc kubenswrapper[4769]: I0122 14:06:46.572109 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd739567-06f9-45a6-b424-6ff02babf529-host" (OuterVolumeSpecName: "host") pod "bd739567-06f9-45a6-b424-6ff02babf529" (UID: "bd739567-06f9-45a6-b424-6ff02babf529"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 14:06:46 crc kubenswrapper[4769]: I0122 14:06:46.572923 4769 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bd739567-06f9-45a6-b424-6ff02babf529-host\") on node \"crc\" DevicePath \"\"" Jan 22 14:06:46 crc kubenswrapper[4769]: I0122 14:06:46.577524 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd739567-06f9-45a6-b424-6ff02babf529-kube-api-access-dxszj" (OuterVolumeSpecName: "kube-api-access-dxszj") pod "bd739567-06f9-45a6-b424-6ff02babf529" (UID: "bd739567-06f9-45a6-b424-6ff02babf529"). InnerVolumeSpecName "kube-api-access-dxszj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:06:46 crc kubenswrapper[4769]: I0122 14:06:46.675091 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxszj\" (UniqueName: \"kubernetes.io/projected/bd739567-06f9-45a6-b424-6ff02babf529-kube-api-access-dxszj\") on node \"crc\" DevicePath \"\"" Jan 22 14:06:46 crc kubenswrapper[4769]: I0122 14:06:46.894867 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd739567-06f9-45a6-b424-6ff02babf529" path="/var/lib/kubelet/pods/bd739567-06f9-45a6-b424-6ff02babf529/volumes" Jan 22 14:06:47 crc kubenswrapper[4769]: I0122 14:06:47.378516 4769 scope.go:117] "RemoveContainer" containerID="11242a9a9c2d5e36764427e969ca476d75b5cdf241d3ec86f11fa1bb416dffb8" Jan 22 14:06:47 crc kubenswrapper[4769]: I0122 14:06:47.378557 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tbnjt/crc-debug-z66f5" Jan 22 14:06:58 crc kubenswrapper[4769]: I0122 14:06:58.988258 4769 scope.go:117] "RemoveContainer" containerID="9b9e64b997b26d114d51b0ae4c6e0266bbcb40beb8208c3fa5614f05a348bcc2" Jan 22 14:06:59 crc kubenswrapper[4769]: I0122 14:06:59.012582 4769 scope.go:117] "RemoveContainer" containerID="ae72a3cad378713d6148c709f4937c708ece4459bfb2c249eb2d7b58d0c80b04" Jan 22 14:06:59 crc kubenswrapper[4769]: I0122 14:06:59.035760 4769 scope.go:117] "RemoveContainer" containerID="787c971a0dea74b3f6ee351dd1bb60c21eb90e1fc50d951e6c355694f371ee32" Jan 22 14:06:59 crc kubenswrapper[4769]: I0122 14:06:59.090082 4769 scope.go:117] "RemoveContainer" containerID="401fb4362859b85fbcab13853d6edb403e6c11a9836d41d62c76e8de98656fce" Jan 22 14:06:59 crc kubenswrapper[4769]: I0122 14:06:59.134283 4769 scope.go:117] "RemoveContainer" containerID="1df5bb57a2b37a726deb06ee2a4311afcd91a86d912ad8365dad00a8584aad2b" Jan 22 14:06:59 crc kubenswrapper[4769]: I0122 14:06:59.158752 4769 scope.go:117] "RemoveContainer" containerID="cd37417a78b080b1ccc1b5edbe869aca8460373ef9a4d35cbfcb0a8060072f8f" Jan 22 14:07:15 crc kubenswrapper[4769]: I0122 14:07:15.758192 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-8bb3-account-create-update-x6jhs_ec90402f-c994-4710-b82f-5c8cc3f12fdf/mariadb-account-create-update/0.log" Jan 22 14:07:16 crc kubenswrapper[4769]: I0122 14:07:16.006799 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5765d95c66-48prv_95a5cf33-efc2-4ca4-93cf-c397436588cb/barbican-api/0.log" Jan 22 14:07:16 crc kubenswrapper[4769]: I0122 14:07:16.148492 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5765d95c66-48prv_95a5cf33-efc2-4ca4-93cf-c397436588cb/barbican-api-log/0.log" Jan 22 14:07:16 crc kubenswrapper[4769]: I0122 14:07:16.192778 4769 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_barbican-db-create-5nx2t_3d72603e-a10a-4490-8298-67db64d087fc/mariadb-database-create/0.log" Jan 22 14:07:16 crc kubenswrapper[4769]: I0122 14:07:16.360004 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-db-sync-zzjpd_a7f766e1-262c-4861-a117-2454631e284f/barbican-db-sync/0.log" Jan 22 14:07:16 crc kubenswrapper[4769]: I0122 14:07:16.387179 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-fffc955cd-tlfq2_1ced7731-706e-49ab-8e05-af9f7dc7465a/barbican-keystone-listener/0.log" Jan 22 14:07:16 crc kubenswrapper[4769]: I0122 14:07:16.478239 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-fffc955cd-tlfq2_1ced7731-706e-49ab-8e05-af9f7dc7465a/barbican-keystone-listener-log/0.log" Jan 22 14:07:16 crc kubenswrapper[4769]: I0122 14:07:16.627986 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-79fdf5695-77th5_2d271baa-4d4e-42f2-87ec-a0c8a7314560/barbican-worker-log/0.log" Jan 22 14:07:16 crc kubenswrapper[4769]: I0122 14:07:16.629212 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-79fdf5695-77th5_2d271baa-4d4e-42f2-87ec-a0c8a7314560/barbican-worker/0.log" Jan 22 14:07:16 crc kubenswrapper[4769]: I0122 14:07:16.798514 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_d9fe083b-8f17-4c51-87ff-a8a7f447190d/ceilometer-central-agent/0.log" Jan 22 14:07:16 crc kubenswrapper[4769]: I0122 14:07:16.848060 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_d9fe083b-8f17-4c51-87ff-a8a7f447190d/ceilometer-notification-agent/0.log" Jan 22 14:07:16 crc kubenswrapper[4769]: I0122 14:07:16.890013 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_d9fe083b-8f17-4c51-87ff-a8a7f447190d/proxy-httpd/0.log" Jan 22 14:07:16 crc kubenswrapper[4769]: I0122 14:07:16.981645 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_d9fe083b-8f17-4c51-87ff-a8a7f447190d/sg-core/0.log" Jan 22 14:07:17 crc kubenswrapper[4769]: I0122 14:07:17.046498 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-8372-account-create-update-lq4fn_51e2f7fd-cd2e-4a84-b62a-27915d32469c/mariadb-account-create-update/0.log" Jan 22 14:07:17 crc kubenswrapper[4769]: I0122 14:07:17.188116 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_f66670ed-ef72-4a45-be6e-add4b5f52f94/cinder-api/0.log" Jan 22 14:07:17 crc kubenswrapper[4769]: I0122 14:07:17.263781 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_f66670ed-ef72-4a45-be6e-add4b5f52f94/cinder-api-log/0.log" Jan 22 14:07:17 crc kubenswrapper[4769]: I0122 14:07:17.325499 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-db-create-7r9tp_ae6aa3a5-37b1-42b7-9bac-b861b2d47bb0/mariadb-database-create/0.log" Jan 22 14:07:17 crc kubenswrapper[4769]: I0122 14:07:17.466720 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-db-sync-l4hnw_3eb8819f-512d-43d8-a59e-1ba8e7e1fb06/cinder-db-sync/0.log" Jan 22 14:07:17 crc kubenswrapper[4769]: I0122 14:07:17.600212 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_4552f275-d56c-4f3d-a8fd-7e5c4e2da02e/cinder-scheduler/0.log" Jan 22 14:07:17 crc kubenswrapper[4769]: I0122 
14:07:17.629995 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_4552f275-d56c-4f3d-a8fd-7e5c4e2da02e/probe/0.log" Jan 22 14:07:17 crc kubenswrapper[4769]: I0122 14:07:17.797631 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-59cf4bdb65-n9fh2_6862cbe8-3411-44fc-a4a8-429c3551f695/init/0.log" Jan 22 14:07:17 crc kubenswrapper[4769]: I0122 14:07:17.943208 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-59cf4bdb65-n9fh2_6862cbe8-3411-44fc-a4a8-429c3551f695/init/0.log" Jan 22 14:07:17 crc kubenswrapper[4769]: I0122 14:07:17.946681 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-59cf4bdb65-n9fh2_6862cbe8-3411-44fc-a4a8-429c3551f695/dnsmasq-dns/0.log" Jan 22 14:07:18 crc kubenswrapper[4769]: I0122 14:07:18.018308 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-b906-account-create-update-rndmt_73fd3df5-6e83-4893-9368-66c1ba35155a/mariadb-account-create-update/0.log" Jan 22 14:07:18 crc kubenswrapper[4769]: I0122 14:07:18.147847 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-db-create-dxwjl_b909a789-674d-40ba-b332-700e27464966/mariadb-database-create/0.log" Jan 22 14:07:18 crc kubenswrapper[4769]: I0122 14:07:18.219236 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-db-sync-t9sxw_b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299/glance-db-sync/0.log" Jan 22 14:07:18 crc kubenswrapper[4769]: I0122 14:07:18.369192 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_6e1405ea-42cd-4345-b44a-8e72350a3357/glance-httpd/0.log" Jan 22 14:07:18 crc kubenswrapper[4769]: I0122 14:07:18.411811 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_6e1405ea-42cd-4345-b44a-8e72350a3357/glance-log/0.log" Jan 22 14:07:18 crc kubenswrapper[4769]: I0122 14:07:18.568986 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_adf621f0-a198-4042-93a3-791ed71e1ee3/glance-log/0.log" Jan 22 14:07:18 crc kubenswrapper[4769]: I0122 14:07:18.596683 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_adf621f0-a198-4042-93a3-791ed71e1ee3/glance-httpd/0.log" Jan 22 14:07:18 crc kubenswrapper[4769]: I0122 14:07:18.742188 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7cc4c8d8bd-69kmb_9a6a04bb-fa49-41f8-b75b-9c27873f8a1f/horizon/0.log" Jan 22 14:07:18 crc kubenswrapper[4769]: I0122 14:07:18.786495 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7cc4c8d8bd-69kmb_9a6a04bb-fa49-41f8-b75b-9c27873f8a1f/horizon-log/0.log" Jan 22 14:07:18 crc kubenswrapper[4769]: I0122 14:07:18.834352 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-0c5f-account-create-update-dbzd4_bced8c79-d4b4-42dc-ba19-a4ba1eeb4387/mariadb-account-create-update/0.log" Jan 22 14:07:18 crc kubenswrapper[4769]: I0122 14:07:18.980016 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-bootstrap-nv6tp_4b938618-acdf-4f5f-8a04-daabc17cbb0c/keystone-bootstrap/0.log" Jan 22 14:07:19 crc kubenswrapper[4769]: I0122 14:07:19.108314 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-d8d684bc6-pmxwh_ddb12191-d02d-4e79-82cd-d164ecaf2093/keystone-api/0.log" Jan 22 14:07:19 crc 
kubenswrapper[4769]: I0122 14:07:19.177173 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-db-create-mw8m7_8e5e1134-cb08-4676-b40b-5e05af038ec7/mariadb-database-create/0.log" Jan 22 14:07:19 crc kubenswrapper[4769]: I0122 14:07:19.294286 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-db-sync-r7c9w_275c0c66-cbd1-4469-81f6-c33a1eab0ed6/keystone-db-sync/0.log" Jan 22 14:07:19 crc kubenswrapper[4769]: I0122 14:07:19.542119 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_27867d6f-28eb-45b6-afd4-9ad9da5a5a0f/kube-state-metrics/0.log" Jan 22 14:07:19 crc kubenswrapper[4769]: I0122 14:07:19.727645 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-24cb-account-create-update-rtdf4_cb68cb3e-c079-4e87-ae9b-be93a2b8b80e/mariadb-account-create-update/0.log" Jan 22 14:07:19 crc kubenswrapper[4769]: I0122 14:07:19.905623 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5d6bcd56b9-2hx4m_a582ad75-7aa2-4ee6-9631-6726b7db9650/neutron-api/0.log" Jan 22 14:07:19 crc kubenswrapper[4769]: I0122 14:07:19.972681 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5d6bcd56b9-2hx4m_a582ad75-7aa2-4ee6-9631-6726b7db9650/neutron-httpd/0.log" Jan 22 14:07:20 crc kubenswrapper[4769]: I0122 14:07:20.125662 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-db-create-892lk_ad0702a4-ee8a-45da-9cb7-40c2e4b257b9/mariadb-database-create/0.log" Jan 22 14:07:20 crc kubenswrapper[4769]: I0122 14:07:20.224381 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-db-sync-rqjpw_f7c0ef06-5806-418c-8a10-81ea6afb0401/neutron-db-sync/0.log" Jan 22 14:07:20 crc kubenswrapper[4769]: I0122 14:07:20.465254 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_b103e0f8-85be-424c-a705-112fb70500b6/nova-api-api/0.log" Jan 22 14:07:20 crc kubenswrapper[4769]: I0122 14:07:20.503260 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_b103e0f8-85be-424c-a705-112fb70500b6/nova-api-log/0.log" Jan 22 14:07:20 crc kubenswrapper[4769]: I0122 14:07:20.510879 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-264d-account-create-update-4z8cb_fe68065a-9702-4440-a09a-2698d21ad5cc/mariadb-account-create-update/0.log" Jan 22 14:07:20 crc kubenswrapper[4769]: I0122 14:07:20.680480 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-db-create-tx7mp_288566dc-b78e-46e4-9bd3-c61bc9c2a6ce/mariadb-database-create/0.log" Jan 22 14:07:20 crc kubenswrapper[4769]: I0122 14:07:20.744284 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-49d8-account-create-update-gnbhc_b33b7a35-52b8-47c6-b5a7-5cf87d838927/mariadb-account-create-update/0.log" Jan 22 14:07:20 crc kubenswrapper[4769]: I0122 14:07:20.965145 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-cell-mapping-6vgx7_3137766d-8b45-47a0-a7ca-f1a3c381450d/nova-manage/0.log" Jan 22 14:07:21 crc kubenswrapper[4769]: I0122 14:07:21.156363 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-db-sync-hql94_4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf/nova-cell0-conductor-db-sync/0.log" Jan 22 14:07:21 crc kubenswrapper[4769]: I0122 14:07:21.165408 4769 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell0-conductor-0_66c7ff68-1167-4dbe-8e53-40f378941703/nova-cell0-conductor-conductor/0.log" Jan 22 14:07:21 crc kubenswrapper[4769]: I0122 14:07:21.390389 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-db-create-5t26t_e45f7c9a-23a2-40fe-80dc-305f1fbc8e17/mariadb-database-create/0.log" Jan 22 14:07:21 crc kubenswrapper[4769]: I0122 14:07:21.405767 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-cell-mapping-5j7zn_4b01ed3a-6c71-4384-80a2-59814d125061/nova-manage/0.log" Jan 22 14:07:21 crc kubenswrapper[4769]: I0122 14:07:21.724001 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-db-sync-cg5m6_60fa7062-c4e9-4700-88e1-af5262989c6f/nova-cell1-conductor-db-sync/0.log" Jan 22 14:07:21 crc kubenswrapper[4769]: I0122 14:07:21.728445 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_e291c368-66b3-42b3-ad52-e3cd93471116/nova-cell1-conductor-conductor/0.log" Jan 22 14:07:21 crc kubenswrapper[4769]: I0122 14:07:21.911275 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-db-create-fllmn_ecb8a996-384c-4155-b45d-6a6335165545/mariadb-database-create/0.log" Jan 22 14:07:21 crc kubenswrapper[4769]: I0122 14:07:21.978491 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-ddb8-account-create-update-zm48k_cdcc2db5-9739-4e49-a6cc-3f7aff70f97d/mariadb-account-create-update/0.log" Jan 22 14:07:22 crc kubenswrapper[4769]: I0122 14:07:22.169961 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_5697f97b-b5e1-4e54-aebb-540e12b7953c/nova-cell1-novncproxy-novncproxy/0.log" Jan 22 14:07:22 crc kubenswrapper[4769]: I0122 14:07:22.374264 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_a6fa05e3-584d-4c81-bef8-b5224b93fba3/nova-metadata-log/0.log" Jan 22 14:07:22 crc kubenswrapper[4769]: I0122 14:07:22.381584 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_a6fa05e3-584d-4c81-bef8-b5224b93fba3/nova-metadata-metadata/0.log" Jan 22 14:07:22 crc kubenswrapper[4769]: I0122 14:07:22.538163 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_169a141c-dd3f-4efa-9b61-bb8df13bcd49/nova-scheduler-scheduler/0.log" Jan 22 14:07:22 crc kubenswrapper[4769]: I0122 14:07:22.596941 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_048fbe43-0fef-46e8-bc9d-038c96a4696c/mysql-bootstrap/0.log" Jan 22 14:07:23 crc kubenswrapper[4769]: I0122 14:07:23.022168 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_048fbe43-0fef-46e8-bc9d-038c96a4696c/galera/0.log" Jan 22 14:07:23 crc kubenswrapper[4769]: I0122 14:07:23.052027 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_d5478968-e798-44de-b3ed-632864fc0607/mysql-bootstrap/0.log" Jan 22 14:07:23 crc kubenswrapper[4769]: I0122 14:07:23.069999 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_048fbe43-0fef-46e8-bc9d-038c96a4696c/mysql-bootstrap/0.log" Jan 22 14:07:23 crc kubenswrapper[4769]: I0122 14:07:23.242978 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_d5478968-e798-44de-b3ed-632864fc0607/mysql-bootstrap/0.log" Jan 22 14:07:23 crc 
kubenswrapper[4769]: I0122 14:07:23.256341 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_d5478968-e798-44de-b3ed-632864fc0607/galera/0.log" Jan 22 14:07:23 crc kubenswrapper[4769]: I0122 14:07:23.305056 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_a46459a9-7fab-439c-95fe-5d6cdcb16997/openstackclient/0.log" Jan 22 14:07:23 crc kubenswrapper[4769]: I0122 14:07:23.476587 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ljbrk_db7ce269-d7ec-4db1-aab3-b22da5d56c6e/ovn-controller/0.log" Jan 22 14:07:23 crc kubenswrapper[4769]: I0122 14:07:23.563715 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-2ndkt_cbba9b5e-2f1d-4a3a-930e-c835070aefe9/openstack-network-exporter/0.log" Jan 22 14:07:23 crc kubenswrapper[4769]: I0122 14:07:23.710232 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-57w6l_2f6b8be2-7370-47ca-843b-1dea67d837c3/ovsdb-server-init/0.log" Jan 22 14:07:23 crc kubenswrapper[4769]: I0122 14:07:23.904599 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-57w6l_2f6b8be2-7370-47ca-843b-1dea67d837c3/ovsdb-server-init/0.log" Jan 22 14:07:23 crc kubenswrapper[4769]: I0122 14:07:23.935542 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-57w6l_2f6b8be2-7370-47ca-843b-1dea67d837c3/ovs-vswitchd/0.log" Jan 22 14:07:23 crc kubenswrapper[4769]: I0122 14:07:23.996048 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-57w6l_2f6b8be2-7370-47ca-843b-1dea67d837c3/ovsdb-server/0.log" Jan 22 14:07:24 crc kubenswrapper[4769]: I0122 14:07:24.157764 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_32d5b8f0-b7c1-4eeb-9b49-85b0240d28df/openstack-network-exporter/0.log" Jan 22 14:07:24 crc kubenswrapper[4769]: I0122 14:07:24.202721 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_32d5b8f0-b7c1-4eeb-9b49-85b0240d28df/ovn-northd/0.log" Jan 22 14:07:24 crc kubenswrapper[4769]: I0122 14:07:24.235315 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_760402cd-68ff-4d2e-a1ba-c54132e75c13/openstack-network-exporter/0.log" Jan 22 14:07:24 crc kubenswrapper[4769]: I0122 14:07:24.393527 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_760402cd-68ff-4d2e-a1ba-c54132e75c13/ovsdbserver-nb/0.log" Jan 22 14:07:24 crc kubenswrapper[4769]: I0122 14:07:24.500783 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_1a4e51d1-8dea-4f12-b7e9-7888f5672711/openstack-network-exporter/0.log" Jan 22 14:07:24 crc kubenswrapper[4769]: I0122 14:07:24.531179 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_1a4e51d1-8dea-4f12-b7e9-7888f5672711/ovsdbserver-sb/0.log" Jan 22 14:07:24 crc kubenswrapper[4769]: I0122 14:07:24.686968 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6b8cb8655d-vl7kp_8d4588b0-8c00-47bf-8b6d-cab4a5d792ab/placement-api/0.log" Jan 22 14:07:24 crc kubenswrapper[4769]: I0122 14:07:24.788946 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6b8cb8655d-vl7kp_8d4588b0-8c00-47bf-8b6d-cab4a5d792ab/placement-log/0.log" Jan 22 14:07:24 crc kubenswrapper[4769]: I0122 14:07:24.832298 4769 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-a329-account-create-update-5dtjs_46ca4e3b-a376-4f54-88c0-75d4a912d489/mariadb-account-create-update/0.log" Jan 22 14:07:24 crc kubenswrapper[4769]: I0122 14:07:24.999219 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-db-create-7q976_257149e5-e0f3-4721-9329-6c119ce91192/mariadb-database-create/0.log" Jan 22 14:07:25 crc kubenswrapper[4769]: I0122 14:07:25.062065 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-db-sync-bjdj8_a0e92228-1a9b-49fc-9dfd-0493f70f5ee8/placement-db-sync/0.log" Jan 22 14:07:25 crc kubenswrapper[4769]: I0122 14:07:25.227138 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_1fd40f71-8afc-45fa-8a93-e784fb5f63c8/setup-container/0.log" Jan 22 14:07:25 crc kubenswrapper[4769]: I0122 14:07:25.441328 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_1fd40f71-8afc-45fa-8a93-e784fb5f63c8/setup-container/0.log" Jan 22 14:07:25 crc kubenswrapper[4769]: I0122 14:07:25.487627 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_962e2340-5ed3-4560-b61b-4675432bac01/setup-container/0.log" Jan 22 14:07:25 crc kubenswrapper[4769]: I0122 14:07:25.552360 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_1fd40f71-8afc-45fa-8a93-e784fb5f63c8/rabbitmq/0.log" Jan 22 14:07:25 crc kubenswrapper[4769]: I0122 14:07:25.681835 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_962e2340-5ed3-4560-b61b-4675432bac01/setup-container/0.log" Jan 22 14:07:25 crc kubenswrapper[4769]: I0122 14:07:25.685455 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_962e2340-5ed3-4560-b61b-4675432bac01/rabbitmq/0.log" Jan 22 14:07:25 crc kubenswrapper[4769]: I0122 14:07:25.771526 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_root-account-create-update-trlj5_4521e7ce-1245-4a18-9179-83a2b288e227/mariadb-account-create-update/0.log" Jan 22 14:07:25 crc kubenswrapper[4769]: I0122 14:07:25.955782 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-576cb8587-7cl26_75afafe2-c784-45fa-8104-1115c8921138/proxy-server/0.log" Jan 22 14:07:25 crc kubenswrapper[4769]: I0122 14:07:25.966139 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-576cb8587-7cl26_75afafe2-c784-45fa-8104-1115c8921138/proxy-httpd/0.log" Jan 22 14:07:26 crc kubenswrapper[4769]: I0122 14:07:26.152931 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-jmhxf_f13b9a7b-6f5e-48fd-8d95-3beb851e9819/swift-ring-rebalance/0.log" Jan 22 14:07:26 crc kubenswrapper[4769]: I0122 14:07:26.225573 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ce65dba3-22b9-482f-b3da-2f4705468ea4/account-auditor/0.log" Jan 22 14:07:26 crc kubenswrapper[4769]: I0122 14:07:26.259500 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ce65dba3-22b9-482f-b3da-2f4705468ea4/account-reaper/0.log" Jan 22 14:07:26 crc kubenswrapper[4769]: I0122 14:07:26.418339 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ce65dba3-22b9-482f-b3da-2f4705468ea4/account-replicator/0.log" Jan 22 14:07:26 crc kubenswrapper[4769]: I0122 14:07:26.609842 
Jan 22 14:07:26 crc kubenswrapper[4769]: I0122 14:07:26.609890 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ce65dba3-22b9-482f-b3da-2f4705468ea4/account-server/0.log"
Jan 22 14:07:26 crc kubenswrapper[4769]: I0122 14:07:26.720386 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ce65dba3-22b9-482f-b3da-2f4705468ea4/container-replicator/0.log"
Jan 22 14:07:26 crc kubenswrapper[4769]: I0122 14:07:26.800771 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ce65dba3-22b9-482f-b3da-2f4705468ea4/container-updater/0.log"
Jan 22 14:07:26 crc kubenswrapper[4769]: I0122 14:07:26.817670 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ce65dba3-22b9-482f-b3da-2f4705468ea4/container-server/0.log"
Jan 22 14:07:26 crc kubenswrapper[4769]: I0122 14:07:26.931715 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ce65dba3-22b9-482f-b3da-2f4705468ea4/object-auditor/0.log"
Jan 22 14:07:26 crc kubenswrapper[4769]: I0122 14:07:26.997591 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ce65dba3-22b9-482f-b3da-2f4705468ea4/object-expirer/0.log"
Jan 22 14:07:27 crc kubenswrapper[4769]: I0122 14:07:27.037438 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ce65dba3-22b9-482f-b3da-2f4705468ea4/object-server/0.log"
Jan 22 14:07:27 crc kubenswrapper[4769]: I0122 14:07:27.057477 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ce65dba3-22b9-482f-b3da-2f4705468ea4/object-replicator/0.log"
Jan 22 14:07:27 crc kubenswrapper[4769]: I0122 14:07:27.144477 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ce65dba3-22b9-482f-b3da-2f4705468ea4/object-updater/0.log"
Jan 22 14:07:27 crc kubenswrapper[4769]: I0122 14:07:27.171468 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ce65dba3-22b9-482f-b3da-2f4705468ea4/rsync/0.log"
Jan 22 14:07:27 crc kubenswrapper[4769]: I0122 14:07:27.282626 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ce65dba3-22b9-482f-b3da-2f4705468ea4/swift-recon-cron/0.log"
Jan 22 14:07:27 crc kubenswrapper[4769]: I0122 14:07:27.299562 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_3aa5525a-0eb2-487f-8721-3ef58f5df4aa/memcached/0.log"
Jan 22 14:07:49 crc kubenswrapper[4769]: I0122 14:07:49.972862 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-59dd8b7cbf-54q5q_141f0476-23eb-4a43-a4ac-4d33c12bfb5b/manager/0.log"
Jan 22 14:07:50 crc kubenswrapper[4769]: I0122 14:07:50.133486 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c02f5c6e90e220c3e85f478d75741465573a92143592780ee4258ca577lthv9_7585045d-5962-4b7d-903e-97f301a8fd47/util/0.log"
Jan 22 14:07:50 crc kubenswrapper[4769]: I0122 14:07:50.334604 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c02f5c6e90e220c3e85f478d75741465573a92143592780ee4258ca577lthv9_7585045d-5962-4b7d-903e-97f301a8fd47/util/0.log"
Jan 22 14:07:50 crc kubenswrapper[4769]: I0122 14:07:50.349804 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c02f5c6e90e220c3e85f478d75741465573a92143592780ee4258ca577lthv9_7585045d-5962-4b7d-903e-97f301a8fd47/pull/0.log"
Jan 22 14:07:50 crc kubenswrapper[4769]: I0122 14:07:50.356557 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c02f5c6e90e220c3e85f478d75741465573a92143592780ee4258ca577lthv9_7585045d-5962-4b7d-903e-97f301a8fd47/pull/0.log"
Jan 22 14:07:50 crc kubenswrapper[4769]: I0122 14:07:50.686685 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c02f5c6e90e220c3e85f478d75741465573a92143592780ee4258ca577lthv9_7585045d-5962-4b7d-903e-97f301a8fd47/pull/0.log"
Jan 22 14:07:50 crc kubenswrapper[4769]: I0122 14:07:50.764118 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c02f5c6e90e220c3e85f478d75741465573a92143592780ee4258ca577lthv9_7585045d-5962-4b7d-903e-97f301a8fd47/util/0.log"
Jan 22 14:07:50 crc kubenswrapper[4769]: I0122 14:07:50.800377 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c02f5c6e90e220c3e85f478d75741465573a92143592780ee4258ca577lthv9_7585045d-5962-4b7d-903e-97f301a8fd47/extract/0.log"
Jan 22 14:07:50 crc kubenswrapper[4769]: I0122 14:07:50.925507 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-69cf5d4557-2q2v2_bc0b4b03-ee7e-44ed-9c1f-f481ae1a3049/manager/0.log"
Jan 22 14:07:51 crc kubenswrapper[4769]: I0122 14:07:51.062945 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-rlcb9_c6b325d8-50c6-411a-bc7f-938b284f0efb/manager/0.log"
Jan 22 14:07:51 crc kubenswrapper[4769]: I0122 14:07:51.195035 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-wvxp8_ae11ee9d-5ccf-490d-b457-294820d6a337/manager/0.log"
Jan 22 14:07:51 crc kubenswrapper[4769]: I0122 14:07:51.279444 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-brq9d_d40b03ae-0991-4364-85f3-89cf5e8d5686/manager/0.log"
Jan 22 14:07:51 crc kubenswrapper[4769]: I0122 14:07:51.423223 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-8rxgq_7d908338-dcdc-4423-b719-02d30f3834ed/manager/0.log"
Jan 22 14:07:51 crc kubenswrapper[4769]: I0122 14:07:51.687776 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-69d6c9f5b8-5njtw_c367fcfb-38d9-4834-970d-7004d16c8249/manager/0.log"
Jan 22 14:07:51 crc kubenswrapper[4769]: I0122 14:07:51.818825 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-54ccf4f85d-zt4sd_13c33fdb-b388-4fdf-996c-544286f47a73/manager/0.log"
Jan 22 14:07:52 crc kubenswrapper[4769]: I0122 14:07:52.029268 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-f2klg_d8d08194-af60-4614-b425-1b45340cd73b/manager/0.log"
Jan 22 14:07:52 crc kubenswrapper[4769]: I0122 14:07:52.182705 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-ttb7f_3d8a97d6-e3bd-49e0-bc78-024286cce303/manager/0.log"
Jan 22 14:07:52 crc kubenswrapper[4769]: I0122 14:07:52.266001 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-w77v6_a32a1e6f-004c-4675-abed-10078b43492a/manager/0.log"
Jan 22 14:07:52 crc kubenswrapper[4769]: I0122 14:07:52.381158 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5d8f59fb49-x8dvt_ebd5834b-ef11-40bb-9d15-6878767e7bef/manager/0.log"
Jan 22 14:07:52 crc kubenswrapper[4769]: I0122 14:07:52.524893 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-6b8bc8d87d-mwhh9_80a16478-da8a-4d2f-89df-163fada49abe/manager/0.log"
Jan 22 14:07:52 crc kubenswrapper[4769]: I0122 14:07:52.581774 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7bd9774b6-fzz6p_8217a619-751c-4d07-a96c-ce3208f08e84/manager/0.log"
Jan 22 14:07:52 crc kubenswrapper[4769]: I0122 14:07:52.735065 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b8542tcht_2b0a07de-4458-4970-a304-a608625bdebf/manager/0.log"
Jan 22 14:07:52 crc kubenswrapper[4769]: I0122 14:07:52.915132 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-f94887bb5-8mc8h_a48b50b3-ad51-4268-a926-bf2f1d7fd3f6/operator/0.log"
Jan 22 14:07:53 crc kubenswrapper[4769]: I0122 14:07:53.180256 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-m6xzn_a2d7498a-59be-42c8-913e-d8c8c596828f/registry-server/0.log"
Jan 22 14:07:53 crc kubenswrapper[4769]: I0122 14:07:53.485266 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5d646b7d76-prfwv_11299941-70c0-41a8-ad9c-5c4648c3aa95/manager/0.log"
Jan 22 14:07:53 crc kubenswrapper[4769]: I0122 14:07:53.554637 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-ctf5z_f13c0d19-4c14-4897-bbc5-5c220d207e41/manager/0.log"
Jan 22 14:07:53 crc kubenswrapper[4769]: I0122 14:07:53.730673 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-54d678f547-4dd5j_a2bbc43c-9feb-4287-9e35-6f100c6644f6/manager/0.log"
Jan 22 14:07:53 crc kubenswrapper[4769]: I0122 14:07:53.743922 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-hv48h_14005034-1ce8-4d62-afbc-66cd1d0d9be1/operator/0.log"
Jan 22 14:07:53 crc kubenswrapper[4769]: I0122 14:07:53.945865 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-jbtsm_d931ff7f-f554-4249-bc34-2cd09fc97427/manager/0.log"
Jan 22 14:07:54 crc kubenswrapper[4769]: I0122 14:07:54.061849 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-gwzt2_3c6369d9-2ecf-4187-bb10-76bde13ecd5d/manager/0.log"
Jan 22 14:07:54 crc kubenswrapper[4769]: I0122 14:07:54.331650 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-pkl6g_ed1198a5-a7fa-4ab4-9656-8e9700deec37/manager/0.log"
Jan 22 14:07:54 crc kubenswrapper[4769]: I0122 14:07:54.372909 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5ffb9c6597-b2w8p_31021ae3-dbb7-4ceb-8737-31052d849f0a/manager/0.log"
path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5ffb9c6597-b2w8p_31021ae3-dbb7-4ceb-8737-31052d849f0a/manager/0.log" Jan 22 14:07:59 crc kubenswrapper[4769]: I0122 14:07:59.289678 4769 scope.go:117] "RemoveContainer" containerID="df266f1e50e71fe12d82262c0a9066d4bf0ba22b1f00a59909f486af0c226b44" Jan 22 14:08:12 crc kubenswrapper[4769]: I0122 14:08:12.204730 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-pzj8w_db7a69ec-2a82-4f9b-b83a-42237a02087e/control-plane-machine-set-operator/0.log" Jan 22 14:08:12 crc kubenswrapper[4769]: I0122 14:08:12.367938 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-65brj_f4e58a9e-ecc8-43de-9518-0b014b2a27d2/kube-rbac-proxy/0.log" Jan 22 14:08:12 crc kubenswrapper[4769]: I0122 14:08:12.398966 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-65brj_f4e58a9e-ecc8-43de-9518-0b014b2a27d2/machine-api-operator/0.log" Jan 22 14:08:24 crc kubenswrapper[4769]: I0122 14:08:24.585338 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-vn9qf_0390ceac-8902-475a-b739-ddc13392f828/cert-manager-controller/0.log" Jan 22 14:08:24 crc kubenswrapper[4769]: I0122 14:08:24.768208 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-dzj2v_e3a1ec89-c852-4274-b95b-c070b9cf8c20/cert-manager-webhook/0.log" Jan 22 14:08:24 crc kubenswrapper[4769]: I0122 14:08:24.772963 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-ptnxb_2bdf39e4-511e-4d06-a19a-7aa0cda68e94/cert-manager-cainjector/0.log" Jan 22 14:08:37 crc kubenswrapper[4769]: I0122 14:08:37.522783 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-t9pnx_bd1eaf1c-9da8-4372-888f-ed8464d4313d/nmstate-console-plugin/0.log" Jan 22 14:08:37 crc kubenswrapper[4769]: I0122 14:08:37.722216 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-v6r9x_7e7ab7e8-7c34-4b26-9c19-33ae90a756ec/nmstate-handler/0.log" Jan 22 14:08:37 crc kubenswrapper[4769]: I0122 14:08:37.768939 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-xsnfh_fd9c945e-a392-4a96-8a06-893a09e8dc19/kube-rbac-proxy/0.log" Jan 22 14:08:37 crc kubenswrapper[4769]: I0122 14:08:37.841903 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-xsnfh_fd9c945e-a392-4a96-8a06-893a09e8dc19/nmstate-metrics/0.log" Jan 22 14:08:37 crc kubenswrapper[4769]: I0122 14:08:37.929923 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-z29kl_9342ab94-785a-427b-84d2-5ac6ff709531/nmstate-operator/0.log" Jan 22 14:08:38 crc kubenswrapper[4769]: I0122 14:08:38.100201 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-64j27_880459e4-297b-408b-8205-c2197bf19c18/nmstate-webhook/0.log" Jan 22 14:08:59 crc kubenswrapper[4769]: I0122 14:08:59.385689 4769 scope.go:117] "RemoveContainer" containerID="8cddcdbb8911a19c3b16e342ad30ed08a0f42dc1a1d70ee5aaed962fdb512de3" Jan 22 14:08:59 crc kubenswrapper[4769]: I0122 14:08:59.421308 4769 scope.go:117] "RemoveContainer" 
containerID="fe451f9d4d036e3a9401a1c3a26fc5a0b7d0eb48182d28ec094d84c5d2642db8" Jan 22 14:09:05 crc kubenswrapper[4769]: I0122 14:09:05.744011 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-qkpds_8fbbec23-1005-4364-bf82-8a646a24801a/kube-rbac-proxy/0.log" Jan 22 14:09:05 crc kubenswrapper[4769]: I0122 14:09:05.858941 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-qkpds_8fbbec23-1005-4364-bf82-8a646a24801a/controller/0.log" Jan 22 14:09:05 crc kubenswrapper[4769]: I0122 14:09:05.953974 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5vm9t_877a13a0-eef8-4409-b421-e3a8c23abc8a/cp-frr-files/0.log" Jan 22 14:09:06 crc kubenswrapper[4769]: I0122 14:09:06.161533 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5vm9t_877a13a0-eef8-4409-b421-e3a8c23abc8a/cp-metrics/0.log" Jan 22 14:09:06 crc kubenswrapper[4769]: I0122 14:09:06.161879 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5vm9t_877a13a0-eef8-4409-b421-e3a8c23abc8a/cp-frr-files/0.log" Jan 22 14:09:06 crc kubenswrapper[4769]: I0122 14:09:06.188659 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5vm9t_877a13a0-eef8-4409-b421-e3a8c23abc8a/cp-reloader/0.log" Jan 22 14:09:06 crc kubenswrapper[4769]: I0122 14:09:06.194397 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5vm9t_877a13a0-eef8-4409-b421-e3a8c23abc8a/cp-reloader/0.log" Jan 22 14:09:06 crc kubenswrapper[4769]: I0122 14:09:06.381091 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5vm9t_877a13a0-eef8-4409-b421-e3a8c23abc8a/cp-metrics/0.log" Jan 22 14:09:06 crc kubenswrapper[4769]: I0122 14:09:06.385059 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5vm9t_877a13a0-eef8-4409-b421-e3a8c23abc8a/cp-reloader/0.log" Jan 22 14:09:06 crc kubenswrapper[4769]: I0122 14:09:06.416621 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5vm9t_877a13a0-eef8-4409-b421-e3a8c23abc8a/cp-frr-files/0.log" Jan 22 14:09:06 crc kubenswrapper[4769]: I0122 14:09:06.427116 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5vm9t_877a13a0-eef8-4409-b421-e3a8c23abc8a/cp-metrics/0.log" Jan 22 14:09:06 crc kubenswrapper[4769]: I0122 14:09:06.615578 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5vm9t_877a13a0-eef8-4409-b421-e3a8c23abc8a/cp-frr-files/0.log" Jan 22 14:09:06 crc kubenswrapper[4769]: I0122 14:09:06.635669 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5vm9t_877a13a0-eef8-4409-b421-e3a8c23abc8a/controller/0.log" Jan 22 14:09:06 crc kubenswrapper[4769]: I0122 14:09:06.640003 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5vm9t_877a13a0-eef8-4409-b421-e3a8c23abc8a/cp-reloader/0.log" Jan 22 14:09:06 crc kubenswrapper[4769]: I0122 14:09:06.648368 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5vm9t_877a13a0-eef8-4409-b421-e3a8c23abc8a/cp-metrics/0.log" Jan 22 14:09:06 crc kubenswrapper[4769]: I0122 14:09:06.822544 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5vm9t_877a13a0-eef8-4409-b421-e3a8c23abc8a/frr-metrics/0.log" Jan 22 14:09:06 crc kubenswrapper[4769]: I0122 
14:09:06.878584 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5vm9t_877a13a0-eef8-4409-b421-e3a8c23abc8a/kube-rbac-proxy-frr/0.log" Jan 22 14:09:06 crc kubenswrapper[4769]: I0122 14:09:06.879667 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5vm9t_877a13a0-eef8-4409-b421-e3a8c23abc8a/kube-rbac-proxy/0.log" Jan 22 14:09:07 crc kubenswrapper[4769]: I0122 14:09:07.033002 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5vm9t_877a13a0-eef8-4409-b421-e3a8c23abc8a/reloader/0.log" Jan 22 14:09:07 crc kubenswrapper[4769]: I0122 14:09:07.082947 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-9n85j_82c00d20-0e87-4f34-9cae-d454867c62a0/frr-k8s-webhook-server/0.log" Jan 22 14:09:07 crc kubenswrapper[4769]: I0122 14:09:07.265304 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-ddb77dbc9-z2nv4_0e40742e-231f-4f7b-aa4b-fb58332c3dbe/manager/0.log" Jan 22 14:09:07 crc kubenswrapper[4769]: I0122 14:09:07.486699 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7b46c7846-xbsl9_5ee84f81-0260-4579-b602-c37bcf5cc7aa/webhook-server/0.log" Jan 22 14:09:07 crc kubenswrapper[4769]: I0122 14:09:07.556135 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5vm9t_877a13a0-eef8-4409-b421-e3a8c23abc8a/frr/0.log" Jan 22 14:09:07 crc kubenswrapper[4769]: I0122 14:09:07.568181 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-lwzgw_4762d945-0720-43a9-8af2-0317ce89dda2/kube-rbac-proxy/0.log" Jan 22 14:09:07 crc kubenswrapper[4769]: I0122 14:09:07.928704 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-lwzgw_4762d945-0720-43a9-8af2-0317ce89dda2/speaker/0.log" Jan 22 14:09:10 crc kubenswrapper[4769]: I0122 14:09:10.481701 4769 patch_prober.go:28] interesting pod/machine-config-daemon-hwhw7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 14:09:10 crc kubenswrapper[4769]: I0122 14:09:10.482135 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 14:09:20 crc kubenswrapper[4769]: I0122 14:09:20.500767 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc77w5v_2bd12d13-4630-4e58-95dd-7e6b2bb89428/util/0.log" Jan 22 14:09:20 crc kubenswrapper[4769]: I0122 14:09:20.618838 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc77w5v_2bd12d13-4630-4e58-95dd-7e6b2bb89428/util/0.log" Jan 22 14:09:20 crc kubenswrapper[4769]: I0122 14:09:20.642119 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc77w5v_2bd12d13-4630-4e58-95dd-7e6b2bb89428/pull/0.log" Jan 22 14:09:20 crc kubenswrapper[4769]: I0122 
Jan 22 14:09:20 crc kubenswrapper[4769]: I0122 14:09:20.892538 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc77w5v_2bd12d13-4630-4e58-95dd-7e6b2bb89428/pull/0.log"
Jan 22 14:09:20 crc kubenswrapper[4769]: I0122 14:09:20.895687 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc77w5v_2bd12d13-4630-4e58-95dd-7e6b2bb89428/util/0.log"
Jan 22 14:09:20 crc kubenswrapper[4769]: I0122 14:09:20.909877 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc77w5v_2bd12d13-4630-4e58-95dd-7e6b2bb89428/extract/0.log"
Jan 22 14:09:21 crc kubenswrapper[4769]: I0122 14:09:21.067560 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71342pvx_38dd0c5f-6afb-4730-8900-e3e8b33f282a/util/0.log"
Jan 22 14:09:21 crc kubenswrapper[4769]: I0122 14:09:21.222402 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71342pvx_38dd0c5f-6afb-4730-8900-e3e8b33f282a/util/0.log"
Jan 22 14:09:21 crc kubenswrapper[4769]: I0122 14:09:21.247476 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71342pvx_38dd0c5f-6afb-4730-8900-e3e8b33f282a/pull/0.log"
Jan 22 14:09:21 crc kubenswrapper[4769]: I0122 14:09:21.260407 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71342pvx_38dd0c5f-6afb-4730-8900-e3e8b33f282a/pull/0.log"
Jan 22 14:09:21 crc kubenswrapper[4769]: I0122 14:09:21.412424 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71342pvx_38dd0c5f-6afb-4730-8900-e3e8b33f282a/pull/0.log"
Jan 22 14:09:21 crc kubenswrapper[4769]: I0122 14:09:21.418544 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71342pvx_38dd0c5f-6afb-4730-8900-e3e8b33f282a/util/0.log"
Jan 22 14:09:21 crc kubenswrapper[4769]: I0122 14:09:21.465135 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71342pvx_38dd0c5f-6afb-4730-8900-e3e8b33f282a/extract/0.log"
Jan 22 14:09:21 crc kubenswrapper[4769]: I0122 14:09:21.594095 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8vlvj_6bbcc4b3-c280-4093-9419-7d94204256fe/extract-utilities/0.log"
Jan 22 14:09:21 crc kubenswrapper[4769]: I0122 14:09:21.788323 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8vlvj_6bbcc4b3-c280-4093-9419-7d94204256fe/extract-utilities/0.log"
Jan 22 14:09:21 crc kubenswrapper[4769]: I0122 14:09:21.804749 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8vlvj_6bbcc4b3-c280-4093-9419-7d94204256fe/extract-content/0.log"
Jan 22 14:09:21 crc kubenswrapper[4769]: I0122 14:09:21.831476 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8vlvj_6bbcc4b3-c280-4093-9419-7d94204256fe/extract-content/0.log"
Jan 22 14:09:22 crc kubenswrapper[4769]: I0122 14:09:22.043013 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8vlvj_6bbcc4b3-c280-4093-9419-7d94204256fe/extract-content/0.log"
Jan 22 14:09:22 crc kubenswrapper[4769]: I0122 14:09:22.060304 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8vlvj_6bbcc4b3-c280-4093-9419-7d94204256fe/extract-utilities/0.log"
Jan 22 14:09:22 crc kubenswrapper[4769]: I0122 14:09:22.159493 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8vlvj_6bbcc4b3-c280-4093-9419-7d94204256fe/registry-server/0.log"
Jan 22 14:09:22 crc kubenswrapper[4769]: I0122 14:09:22.230922 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8nrlf_5b9b79f2-127c-4533-a170-8cb16e845c18/extract-utilities/0.log"
Jan 22 14:09:22 crc kubenswrapper[4769]: I0122 14:09:22.415973 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8nrlf_5b9b79f2-127c-4533-a170-8cb16e845c18/extract-utilities/0.log"
Jan 22 14:09:22 crc kubenswrapper[4769]: I0122 14:09:22.443001 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8nrlf_5b9b79f2-127c-4533-a170-8cb16e845c18/extract-content/0.log"
Jan 22 14:09:22 crc kubenswrapper[4769]: I0122 14:09:22.462784 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8nrlf_5b9b79f2-127c-4533-a170-8cb16e845c18/extract-content/0.log"
Jan 22 14:09:22 crc kubenswrapper[4769]: I0122 14:09:22.649747 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8nrlf_5b9b79f2-127c-4533-a170-8cb16e845c18/extract-content/0.log"
Jan 22 14:09:22 crc kubenswrapper[4769]: I0122 14:09:22.686893 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8nrlf_5b9b79f2-127c-4533-a170-8cb16e845c18/extract-utilities/0.log"
Jan 22 14:09:22 crc kubenswrapper[4769]: I0122 14:09:22.937625 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-twpxx_d88e1938-2f4c-43c7-9af2-98fb7222cee2/extract-utilities/0.log"
Jan 22 14:09:22 crc kubenswrapper[4769]: I0122 14:09:22.973605 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8nrlf_5b9b79f2-127c-4533-a170-8cb16e845c18/registry-server/0.log"
Jan 22 14:09:22 crc kubenswrapper[4769]: I0122 14:09:22.993205 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-7vfmb_1cfacd8e-cbec-4f68-b90c-ede3a679e454/marketplace-operator/0.log"
Jan 22 14:09:23 crc kubenswrapper[4769]: I0122 14:09:23.165558 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-twpxx_d88e1938-2f4c-43c7-9af2-98fb7222cee2/extract-content/0.log"
Jan 22 14:09:23 crc kubenswrapper[4769]: I0122 14:09:23.178317 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-twpxx_d88e1938-2f4c-43c7-9af2-98fb7222cee2/extract-content/0.log"
Jan 22 14:09:23 crc kubenswrapper[4769]: I0122 14:09:23.237440 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-twpxx_d88e1938-2f4c-43c7-9af2-98fb7222cee2/extract-utilities/0.log"
Jan 22 14:09:23 crc kubenswrapper[4769]: I0122 14:09:23.255860 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-92j5p"]
Jan 22 14:09:23 crc kubenswrapper[4769]: E0122 14:09:23.256230 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd739567-06f9-45a6-b424-6ff02babf529" containerName="container-00"
Jan 22 14:09:23 crc kubenswrapper[4769]: I0122 14:09:23.256247 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd739567-06f9-45a6-b424-6ff02babf529" containerName="container-00"
Jan 22 14:09:23 crc kubenswrapper[4769]: I0122 14:09:23.256415 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd739567-06f9-45a6-b424-6ff02babf529" containerName="container-00"
Jan 22 14:09:23 crc kubenswrapper[4769]: I0122 14:09:23.257721 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-92j5p"
Jan 22 14:09:23 crc kubenswrapper[4769]: I0122 14:09:23.271433 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-92j5p"]
Jan 22 14:09:23 crc kubenswrapper[4769]: I0122 14:09:23.290273 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7846w\" (UniqueName: \"kubernetes.io/projected/8d134a86-4a31-4784-b202-723a7c7f7249-kube-api-access-7846w\") pod \"redhat-marketplace-92j5p\" (UID: \"8d134a86-4a31-4784-b202-723a7c7f7249\") " pod="openshift-marketplace/redhat-marketplace-92j5p"
Jan 22 14:09:23 crc kubenswrapper[4769]: I0122 14:09:23.290547 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d134a86-4a31-4784-b202-723a7c7f7249-catalog-content\") pod \"redhat-marketplace-92j5p\" (UID: \"8d134a86-4a31-4784-b202-723a7c7f7249\") " pod="openshift-marketplace/redhat-marketplace-92j5p"
Jan 22 14:09:23 crc kubenswrapper[4769]: I0122 14:09:23.290611 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d134a86-4a31-4784-b202-723a7c7f7249-utilities\") pod \"redhat-marketplace-92j5p\" (UID: \"8d134a86-4a31-4784-b202-723a7c7f7249\") " pod="openshift-marketplace/redhat-marketplace-92j5p"
Jan 22 14:09:23 crc kubenswrapper[4769]: I0122 14:09:23.391941 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7846w\" (UniqueName: \"kubernetes.io/projected/8d134a86-4a31-4784-b202-723a7c7f7249-kube-api-access-7846w\") pod \"redhat-marketplace-92j5p\" (UID: \"8d134a86-4a31-4784-b202-723a7c7f7249\") " pod="openshift-marketplace/redhat-marketplace-92j5p"
Jan 22 14:09:23 crc kubenswrapper[4769]: I0122 14:09:23.392003 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d134a86-4a31-4784-b202-723a7c7f7249-catalog-content\") pod \"redhat-marketplace-92j5p\" (UID: \"8d134a86-4a31-4784-b202-723a7c7f7249\") " pod="openshift-marketplace/redhat-marketplace-92j5p"
Jan 22 14:09:23 crc kubenswrapper[4769]: I0122 14:09:23.392063 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d134a86-4a31-4784-b202-723a7c7f7249-utilities\") pod \"redhat-marketplace-92j5p\" (UID: \"8d134a86-4a31-4784-b202-723a7c7f7249\") " pod="openshift-marketplace/redhat-marketplace-92j5p"
(UniqueName: \"kubernetes.io/empty-dir/8d134a86-4a31-4784-b202-723a7c7f7249-utilities\") pod \"redhat-marketplace-92j5p\" (UID: \"8d134a86-4a31-4784-b202-723a7c7f7249\") " pod="openshift-marketplace/redhat-marketplace-92j5p" Jan 22 14:09:23 crc kubenswrapper[4769]: I0122 14:09:23.392669 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d134a86-4a31-4784-b202-723a7c7f7249-catalog-content\") pod \"redhat-marketplace-92j5p\" (UID: \"8d134a86-4a31-4784-b202-723a7c7f7249\") " pod="openshift-marketplace/redhat-marketplace-92j5p" Jan 22 14:09:23 crc kubenswrapper[4769]: I0122 14:09:23.392718 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d134a86-4a31-4784-b202-723a7c7f7249-utilities\") pod \"redhat-marketplace-92j5p\" (UID: \"8d134a86-4a31-4784-b202-723a7c7f7249\") " pod="openshift-marketplace/redhat-marketplace-92j5p" Jan 22 14:09:23 crc kubenswrapper[4769]: I0122 14:09:23.415575 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7846w\" (UniqueName: \"kubernetes.io/projected/8d134a86-4a31-4784-b202-723a7c7f7249-kube-api-access-7846w\") pod \"redhat-marketplace-92j5p\" (UID: \"8d134a86-4a31-4784-b202-723a7c7f7249\") " pod="openshift-marketplace/redhat-marketplace-92j5p" Jan 22 14:09:23 crc kubenswrapper[4769]: I0122 14:09:23.551082 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-twpxx_d88e1938-2f4c-43c7-9af2-98fb7222cee2/extract-utilities/0.log" Jan 22 14:09:23 crc kubenswrapper[4769]: I0122 14:09:23.555586 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-twpxx_d88e1938-2f4c-43c7-9af2-98fb7222cee2/extract-content/0.log" Jan 22 14:09:23 crc kubenswrapper[4769]: I0122 14:09:23.572771 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-twpxx_d88e1938-2f4c-43c7-9af2-98fb7222cee2/registry-server/0.log" Jan 22 14:09:23 crc kubenswrapper[4769]: I0122 14:09:23.586692 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-92j5p" Jan 22 14:09:23 crc kubenswrapper[4769]: I0122 14:09:23.956536 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dtrsx_c5db9abf-deb2-494a-b618-7180fbf1e53e/extract-utilities/0.log" Jan 22 14:09:24 crc kubenswrapper[4769]: I0122 14:09:24.123556 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dtrsx_c5db9abf-deb2-494a-b618-7180fbf1e53e/extract-utilities/0.log" Jan 22 14:09:24 crc kubenswrapper[4769]: I0122 14:09:24.140915 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dtrsx_c5db9abf-deb2-494a-b618-7180fbf1e53e/extract-content/0.log" Jan 22 14:09:24 crc kubenswrapper[4769]: I0122 14:09:24.151891 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dtrsx_c5db9abf-deb2-494a-b618-7180fbf1e53e/extract-content/0.log" Jan 22 14:09:24 crc kubenswrapper[4769]: I0122 14:09:24.207500 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-92j5p"] Jan 22 14:09:24 crc kubenswrapper[4769]: I0122 14:09:24.488072 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dtrsx_c5db9abf-deb2-494a-b618-7180fbf1e53e/extract-content/0.log" Jan 22 14:09:24 crc kubenswrapper[4769]: I0122 14:09:24.543742 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dtrsx_c5db9abf-deb2-494a-b618-7180fbf1e53e/extract-utilities/0.log" Jan 22 14:09:24 crc kubenswrapper[4769]: I0122 14:09:24.783627 4769 generic.go:334] "Generic (PLEG): container finished" podID="8d134a86-4a31-4784-b202-723a7c7f7249" containerID="11ff6403c6dd7be206723f69857fd543b566de1695bf10db1899a84b1e899b8d" exitCode=0 Jan 22 14:09:24 crc kubenswrapper[4769]: I0122 14:09:24.783670 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-92j5p" event={"ID":"8d134a86-4a31-4784-b202-723a7c7f7249","Type":"ContainerDied","Data":"11ff6403c6dd7be206723f69857fd543b566de1695bf10db1899a84b1e899b8d"} Jan 22 14:09:24 crc kubenswrapper[4769]: I0122 14:09:24.783696 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-92j5p" event={"ID":"8d134a86-4a31-4784-b202-723a7c7f7249","Type":"ContainerStarted","Data":"47a39392968bf77600e9b667b4562d27c8835d5c1d21bb61afc9f69211982fac"} Jan 22 14:09:24 crc kubenswrapper[4769]: I0122 14:09:24.800963 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dtrsx_c5db9abf-deb2-494a-b618-7180fbf1e53e/registry-server/0.log" Jan 22 14:09:25 crc kubenswrapper[4769]: I0122 14:09:25.794106 4769 generic.go:334] "Generic (PLEG): container finished" podID="8d134a86-4a31-4784-b202-723a7c7f7249" containerID="bfd583b98c603e3ab895e84401c3449d207098ea05d0b41e61e813796e22bdfb" exitCode=0 Jan 22 14:09:25 crc kubenswrapper[4769]: I0122 14:09:25.794309 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-92j5p" event={"ID":"8d134a86-4a31-4784-b202-723a7c7f7249","Type":"ContainerDied","Data":"bfd583b98c603e3ab895e84401c3449d207098ea05d0b41e61e813796e22bdfb"} Jan 22 14:09:26 crc kubenswrapper[4769]: I0122 14:09:26.815639 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-92j5p" 
event={"ID":"8d134a86-4a31-4784-b202-723a7c7f7249","Type":"ContainerStarted","Data":"cac43fa2539ccf7ae13943d94601d9b2376b357ee37f3132af50904dc13de97c"} Jan 22 14:09:26 crc kubenswrapper[4769]: I0122 14:09:26.842529 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-92j5p" podStartSLOduration=2.443930121 podStartE2EDuration="3.842475392s" podCreationTimestamp="2026-01-22 14:09:23 +0000 UTC" firstStartedPulling="2026-01-22 14:09:24.785865277 +0000 UTC m=+1544.196975206" lastFinishedPulling="2026-01-22 14:09:26.184410548 +0000 UTC m=+1545.595520477" observedRunningTime="2026-01-22 14:09:26.833961951 +0000 UTC m=+1546.245071890" watchObservedRunningTime="2026-01-22 14:09:26.842475392 +0000 UTC m=+1546.253585321" Jan 22 14:09:34 crc kubenswrapper[4769]: I0122 14:09:34.092301 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-92j5p" Jan 22 14:09:34 crc kubenswrapper[4769]: I0122 14:09:34.093685 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-92j5p" Jan 22 14:09:34 crc kubenswrapper[4769]: I0122 14:09:34.187936 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-92j5p" Jan 22 14:09:35 crc kubenswrapper[4769]: I0122 14:09:35.151810 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-92j5p" Jan 22 14:09:35 crc kubenswrapper[4769]: I0122 14:09:35.199881 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-92j5p"] Jan 22 14:09:37 crc kubenswrapper[4769]: I0122 14:09:37.116955 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-92j5p" podUID="8d134a86-4a31-4784-b202-723a7c7f7249" containerName="registry-server" containerID="cri-o://cac43fa2539ccf7ae13943d94601d9b2376b357ee37f3132af50904dc13de97c" gracePeriod=2 Jan 22 14:09:37 crc kubenswrapper[4769]: I0122 14:09:37.609465 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-92j5p" Jan 22 14:09:37 crc kubenswrapper[4769]: I0122 14:09:37.721252 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d134a86-4a31-4784-b202-723a7c7f7249-catalog-content\") pod \"8d134a86-4a31-4784-b202-723a7c7f7249\" (UID: \"8d134a86-4a31-4784-b202-723a7c7f7249\") " Jan 22 14:09:37 crc kubenswrapper[4769]: I0122 14:09:37.721617 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7846w\" (UniqueName: \"kubernetes.io/projected/8d134a86-4a31-4784-b202-723a7c7f7249-kube-api-access-7846w\") pod \"8d134a86-4a31-4784-b202-723a7c7f7249\" (UID: \"8d134a86-4a31-4784-b202-723a7c7f7249\") " Jan 22 14:09:37 crc kubenswrapper[4769]: I0122 14:09:37.721736 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d134a86-4a31-4784-b202-723a7c7f7249-utilities\") pod \"8d134a86-4a31-4784-b202-723a7c7f7249\" (UID: \"8d134a86-4a31-4784-b202-723a7c7f7249\") " Jan 22 14:09:37 crc kubenswrapper[4769]: I0122 14:09:37.722655 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d134a86-4a31-4784-b202-723a7c7f7249-utilities" (OuterVolumeSpecName: "utilities") pod "8d134a86-4a31-4784-b202-723a7c7f7249" (UID: "8d134a86-4a31-4784-b202-723a7c7f7249"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 14:09:37 crc kubenswrapper[4769]: I0122 14:09:37.737497 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d134a86-4a31-4784-b202-723a7c7f7249-kube-api-access-7846w" (OuterVolumeSpecName: "kube-api-access-7846w") pod "8d134a86-4a31-4784-b202-723a7c7f7249" (UID: "8d134a86-4a31-4784-b202-723a7c7f7249"). InnerVolumeSpecName "kube-api-access-7846w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:09:37 crc kubenswrapper[4769]: I0122 14:09:37.747153 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d134a86-4a31-4784-b202-723a7c7f7249-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8d134a86-4a31-4784-b202-723a7c7f7249" (UID: "8d134a86-4a31-4784-b202-723a7c7f7249"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 14:09:37 crc kubenswrapper[4769]: I0122 14:09:37.824114 4769 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d134a86-4a31-4784-b202-723a7c7f7249-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 14:09:37 crc kubenswrapper[4769]: I0122 14:09:37.824406 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7846w\" (UniqueName: \"kubernetes.io/projected/8d134a86-4a31-4784-b202-723a7c7f7249-kube-api-access-7846w\") on node \"crc\" DevicePath \"\"" Jan 22 14:09:37 crc kubenswrapper[4769]: I0122 14:09:37.824483 4769 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d134a86-4a31-4784-b202-723a7c7f7249-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 14:09:38 crc kubenswrapper[4769]: I0122 14:09:38.126968 4769 generic.go:334] "Generic (PLEG): container finished" podID="8d134a86-4a31-4784-b202-723a7c7f7249" containerID="cac43fa2539ccf7ae13943d94601d9b2376b357ee37f3132af50904dc13de97c" exitCode=0 Jan 22 14:09:38 crc kubenswrapper[4769]: I0122 14:09:38.127038 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-92j5p" Jan 22 14:09:38 crc kubenswrapper[4769]: I0122 14:09:38.127061 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-92j5p" event={"ID":"8d134a86-4a31-4784-b202-723a7c7f7249","Type":"ContainerDied","Data":"cac43fa2539ccf7ae13943d94601d9b2376b357ee37f3132af50904dc13de97c"} Jan 22 14:09:38 crc kubenswrapper[4769]: I0122 14:09:38.127108 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-92j5p" event={"ID":"8d134a86-4a31-4784-b202-723a7c7f7249","Type":"ContainerDied","Data":"47a39392968bf77600e9b667b4562d27c8835d5c1d21bb61afc9f69211982fac"} Jan 22 14:09:38 crc kubenswrapper[4769]: I0122 14:09:38.127141 4769 scope.go:117] "RemoveContainer" containerID="cac43fa2539ccf7ae13943d94601d9b2376b357ee37f3132af50904dc13de97c" Jan 22 14:09:38 crc kubenswrapper[4769]: I0122 14:09:38.152821 4769 scope.go:117] "RemoveContainer" containerID="bfd583b98c603e3ab895e84401c3449d207098ea05d0b41e61e813796e22bdfb" Jan 22 14:09:38 crc kubenswrapper[4769]: I0122 14:09:38.170394 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-92j5p"] Jan 22 14:09:38 crc kubenswrapper[4769]: I0122 14:09:38.182829 4769 scope.go:117] "RemoveContainer" containerID="11ff6403c6dd7be206723f69857fd543b566de1695bf10db1899a84b1e899b8d" Jan 22 14:09:38 crc kubenswrapper[4769]: I0122 14:09:38.187235 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-92j5p"] Jan 22 14:09:38 crc kubenswrapper[4769]: I0122 14:09:38.228924 4769 scope.go:117] "RemoveContainer" containerID="cac43fa2539ccf7ae13943d94601d9b2376b357ee37f3132af50904dc13de97c" Jan 22 14:09:38 crc kubenswrapper[4769]: E0122 14:09:38.229532 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cac43fa2539ccf7ae13943d94601d9b2376b357ee37f3132af50904dc13de97c\": container with ID starting with cac43fa2539ccf7ae13943d94601d9b2376b357ee37f3132af50904dc13de97c not found: ID does not exist" containerID="cac43fa2539ccf7ae13943d94601d9b2376b357ee37f3132af50904dc13de97c" Jan 22 14:09:38 crc kubenswrapper[4769]: I0122 14:09:38.229575 4769 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cac43fa2539ccf7ae13943d94601d9b2376b357ee37f3132af50904dc13de97c"} err="failed to get container status \"cac43fa2539ccf7ae13943d94601d9b2376b357ee37f3132af50904dc13de97c\": rpc error: code = NotFound desc = could not find container \"cac43fa2539ccf7ae13943d94601d9b2376b357ee37f3132af50904dc13de97c\": container with ID starting with cac43fa2539ccf7ae13943d94601d9b2376b357ee37f3132af50904dc13de97c not found: ID does not exist" Jan 22 14:09:38 crc kubenswrapper[4769]: I0122 14:09:38.229599 4769 scope.go:117] "RemoveContainer" containerID="bfd583b98c603e3ab895e84401c3449d207098ea05d0b41e61e813796e22bdfb" Jan 22 14:09:38 crc kubenswrapper[4769]: E0122 14:09:38.230058 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bfd583b98c603e3ab895e84401c3449d207098ea05d0b41e61e813796e22bdfb\": container with ID starting with bfd583b98c603e3ab895e84401c3449d207098ea05d0b41e61e813796e22bdfb not found: ID does not exist" containerID="bfd583b98c603e3ab895e84401c3449d207098ea05d0b41e61e813796e22bdfb" Jan 22 14:09:38 crc kubenswrapper[4769]: I0122 14:09:38.230080 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfd583b98c603e3ab895e84401c3449d207098ea05d0b41e61e813796e22bdfb"} err="failed to get container status \"bfd583b98c603e3ab895e84401c3449d207098ea05d0b41e61e813796e22bdfb\": rpc error: code = NotFound desc = could not find container \"bfd583b98c603e3ab895e84401c3449d207098ea05d0b41e61e813796e22bdfb\": container with ID starting with bfd583b98c603e3ab895e84401c3449d207098ea05d0b41e61e813796e22bdfb not found: ID does not exist" Jan 22 14:09:38 crc kubenswrapper[4769]: I0122 14:09:38.230097 4769 scope.go:117] "RemoveContainer" containerID="11ff6403c6dd7be206723f69857fd543b566de1695bf10db1899a84b1e899b8d" Jan 22 14:09:38 crc kubenswrapper[4769]: E0122 14:09:38.232074 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11ff6403c6dd7be206723f69857fd543b566de1695bf10db1899a84b1e899b8d\": container with ID starting with 11ff6403c6dd7be206723f69857fd543b566de1695bf10db1899a84b1e899b8d not found: ID does not exist" containerID="11ff6403c6dd7be206723f69857fd543b566de1695bf10db1899a84b1e899b8d" Jan 22 14:09:38 crc kubenswrapper[4769]: I0122 14:09:38.232110 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11ff6403c6dd7be206723f69857fd543b566de1695bf10db1899a84b1e899b8d"} err="failed to get container status \"11ff6403c6dd7be206723f69857fd543b566de1695bf10db1899a84b1e899b8d\": rpc error: code = NotFound desc = could not find container \"11ff6403c6dd7be206723f69857fd543b566de1695bf10db1899a84b1e899b8d\": container with ID starting with 11ff6403c6dd7be206723f69857fd543b566de1695bf10db1899a84b1e899b8d not found: ID does not exist" Jan 22 14:09:38 crc kubenswrapper[4769]: I0122 14:09:38.898533 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d134a86-4a31-4784-b202-723a7c7f7249" path="/var/lib/kubelet/pods/8d134a86-4a31-4784-b202-723a7c7f7249/volumes" Jan 22 14:09:40 crc kubenswrapper[4769]: I0122 14:09:40.481667 4769 patch_prober.go:28] interesting pod/machine-config-daemon-hwhw7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 14:09:40 crc kubenswrapper[4769]: I0122 14:09:40.482246 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 14:09:46 crc kubenswrapper[4769]: E0122 14:09:46.308336 4769 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.50:33536->38.102.83.50:45103: write tcp 38.102.83.50:33536->38.102.83.50:45103: write: broken pipe Jan 22 14:10:10 crc kubenswrapper[4769]: I0122 14:10:10.482071 4769 patch_prober.go:28] interesting pod/machine-config-daemon-hwhw7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 14:10:10 crc kubenswrapper[4769]: I0122 14:10:10.482582 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 14:10:10 crc kubenswrapper[4769]: I0122 14:10:10.482659 4769 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" Jan 22 14:10:10 crc kubenswrapper[4769]: I0122 14:10:10.483517 4769 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e7bae61700e44833a742aeb47b10d93605fc854e6fc9c589859554c1c0a5b135"} pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 14:10:10 crc kubenswrapper[4769]: I0122 14:10:10.483588 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419" containerName="machine-config-daemon" containerID="cri-o://e7bae61700e44833a742aeb47b10d93605fc854e6fc9c589859554c1c0a5b135" gracePeriod=600 Jan 22 14:10:10 crc kubenswrapper[4769]: E0122 14:10:10.611625 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hwhw7_openshift-machine-config-operator(f0af8746-c9f0-48e6-8a60-02fed286b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419" Jan 22 14:10:11 crc kubenswrapper[4769]: I0122 14:10:11.438411 4769 generic.go:334] "Generic (PLEG): container finished" podID="f0af8746-c9f0-48e6-8a60-02fed286b419" containerID="e7bae61700e44833a742aeb47b10d93605fc854e6fc9c589859554c1c0a5b135" exitCode=0 Jan 22 14:10:11 crc kubenswrapper[4769]: I0122 14:10:11.438762 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" 
event={"ID":"f0af8746-c9f0-48e6-8a60-02fed286b419","Type":"ContainerDied","Data":"e7bae61700e44833a742aeb47b10d93605fc854e6fc9c589859554c1c0a5b135"} Jan 22 14:10:11 crc kubenswrapper[4769]: I0122 14:10:11.438825 4769 scope.go:117] "RemoveContainer" containerID="b11c852b1916b3e6aabc4731560f2f295531ff82773fd1f45e29d26517b1467f" Jan 22 14:10:11 crc kubenswrapper[4769]: I0122 14:10:11.439541 4769 scope.go:117] "RemoveContainer" containerID="e7bae61700e44833a742aeb47b10d93605fc854e6fc9c589859554c1c0a5b135" Jan 22 14:10:11 crc kubenswrapper[4769]: E0122 14:10:11.439846 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hwhw7_openshift-machine-config-operator(f0af8746-c9f0-48e6-8a60-02fed286b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419" Jan 22 14:10:22 crc kubenswrapper[4769]: I0122 14:10:22.893925 4769 scope.go:117] "RemoveContainer" containerID="e7bae61700e44833a742aeb47b10d93605fc854e6fc9c589859554c1c0a5b135" Jan 22 14:10:22 crc kubenswrapper[4769]: E0122 14:10:22.895321 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hwhw7_openshift-machine-config-operator(f0af8746-c9f0-48e6-8a60-02fed286b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419" Jan 22 14:10:37 crc kubenswrapper[4769]: I0122 14:10:37.883900 4769 scope.go:117] "RemoveContainer" containerID="e7bae61700e44833a742aeb47b10d93605fc854e6fc9c589859554c1c0a5b135" Jan 22 14:10:37 crc kubenswrapper[4769]: E0122 14:10:37.885038 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hwhw7_openshift-machine-config-operator(f0af8746-c9f0-48e6-8a60-02fed286b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419" Jan 22 14:10:39 crc kubenswrapper[4769]: I0122 14:10:39.048955 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-mw8m7"] Jan 22 14:10:39 crc kubenswrapper[4769]: I0122 14:10:39.057855 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-a329-account-create-update-5dtjs"] Jan 22 14:10:39 crc kubenswrapper[4769]: I0122 14:10:39.066497 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-0c5f-account-create-update-dbzd4"] Jan 22 14:10:39 crc kubenswrapper[4769]: I0122 14:10:39.074102 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-7q976"] Jan 22 14:10:39 crc kubenswrapper[4769]: I0122 14:10:39.081948 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-a329-account-create-update-5dtjs"] Jan 22 14:10:39 crc kubenswrapper[4769]: I0122 14:10:39.089421 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-7q976"] Jan 22 14:10:39 crc kubenswrapper[4769]: I0122 14:10:39.096555 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-0c5f-account-create-update-dbzd4"] Jan 22 14:10:39 
crc kubenswrapper[4769]: I0122 14:10:39.104277 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-mw8m7"] Jan 22 14:10:40 crc kubenswrapper[4769]: I0122 14:10:40.900652 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="257149e5-e0f3-4721-9329-6c119ce91192" path="/var/lib/kubelet/pods/257149e5-e0f3-4721-9329-6c119ce91192/volumes" Jan 22 14:10:40 crc kubenswrapper[4769]: I0122 14:10:40.901679 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46ca4e3b-a376-4f54-88c0-75d4a912d489" path="/var/lib/kubelet/pods/46ca4e3b-a376-4f54-88c0-75d4a912d489/volumes" Jan 22 14:10:40 crc kubenswrapper[4769]: I0122 14:10:40.902307 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e5e1134-cb08-4676-b40b-5e05af038ec7" path="/var/lib/kubelet/pods/8e5e1134-cb08-4676-b40b-5e05af038ec7/volumes" Jan 22 14:10:40 crc kubenswrapper[4769]: I0122 14:10:40.902923 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bced8c79-d4b4-42dc-ba19-a4ba1eeb4387" path="/var/lib/kubelet/pods/bced8c79-d4b4-42dc-ba19-a4ba1eeb4387/volumes" Jan 22 14:10:44 crc kubenswrapper[4769]: I0122 14:10:44.042719 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-b906-account-create-update-rndmt"] Jan 22 14:10:44 crc kubenswrapper[4769]: I0122 14:10:44.050961 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-dxwjl"] Jan 22 14:10:44 crc kubenswrapper[4769]: I0122 14:10:44.059272 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-b906-account-create-update-rndmt"] Jan 22 14:10:44 crc kubenswrapper[4769]: I0122 14:10:44.066536 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-dxwjl"] Jan 22 14:10:44 crc kubenswrapper[4769]: I0122 14:10:44.902599 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73fd3df5-6e83-4893-9368-66c1ba35155a" path="/var/lib/kubelet/pods/73fd3df5-6e83-4893-9368-66c1ba35155a/volumes" Jan 22 14:10:44 crc kubenswrapper[4769]: I0122 14:10:44.905662 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b909a789-674d-40ba-b332-700e27464966" path="/var/lib/kubelet/pods/b909a789-674d-40ba-b332-700e27464966/volumes" Jan 22 14:10:50 crc kubenswrapper[4769]: I0122 14:10:50.891174 4769 scope.go:117] "RemoveContainer" containerID="e7bae61700e44833a742aeb47b10d93605fc854e6fc9c589859554c1c0a5b135" Jan 22 14:10:50 crc kubenswrapper[4769]: E0122 14:10:50.892105 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hwhw7_openshift-machine-config-operator(f0af8746-c9f0-48e6-8a60-02fed286b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419" Jan 22 14:10:56 crc kubenswrapper[4769]: I0122 14:10:56.932365 4769 generic.go:334] "Generic (PLEG): container finished" podID="7529a8b3-1901-4ac4-9cee-f3ece4581ea8" containerID="cd35217481d81f29cfb74abcbd43b14ccfe181f633147cf4756bf7bb55d0937b" exitCode=0 Jan 22 14:10:56 crc kubenswrapper[4769]: I0122 14:10:56.932407 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tbnjt/must-gather-nlc24" 
event={"ID":"7529a8b3-1901-4ac4-9cee-f3ece4581ea8","Type":"ContainerDied","Data":"cd35217481d81f29cfb74abcbd43b14ccfe181f633147cf4756bf7bb55d0937b"} Jan 22 14:10:56 crc kubenswrapper[4769]: I0122 14:10:56.933446 4769 scope.go:117] "RemoveContainer" containerID="cd35217481d81f29cfb74abcbd43b14ccfe181f633147cf4756bf7bb55d0937b" Jan 22 14:10:57 crc kubenswrapper[4769]: I0122 14:10:57.462234 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-tbnjt_must-gather-nlc24_7529a8b3-1901-4ac4-9cee-f3ece4581ea8/gather/0.log" Jan 22 14:10:59 crc kubenswrapper[4769]: I0122 14:10:59.535629 4769 scope.go:117] "RemoveContainer" containerID="76ee9e3f92bd4b52916160b7315f6f1bcae498478a919fab65490233e1c3a657" Jan 22 14:10:59 crc kubenswrapper[4769]: I0122 14:10:59.561611 4769 scope.go:117] "RemoveContainer" containerID="41ccd1233986e7a4c125219fe7adea8a9635992e6e64e942e038414ae80cde80" Jan 22 14:10:59 crc kubenswrapper[4769]: I0122 14:10:59.613833 4769 scope.go:117] "RemoveContainer" containerID="97b2836a40fe3718dc9876ac751e671d98460d0371e12f643bc7ac498b12c4d8" Jan 22 14:10:59 crc kubenswrapper[4769]: I0122 14:10:59.648585 4769 scope.go:117] "RemoveContainer" containerID="8c802b2b696d681ed9980b953b8105bed5cefd906bb042dcf0b8c4943c91185b" Jan 22 14:10:59 crc kubenswrapper[4769]: I0122 14:10:59.704376 4769 scope.go:117] "RemoveContainer" containerID="fb2e3c339083927502fb6cea262472f4288b04764f08eec3cbd1e7e2b61cc67d" Jan 22 14:10:59 crc kubenswrapper[4769]: I0122 14:10:59.727912 4769 scope.go:117] "RemoveContainer" containerID="c074e42ca3ff188c7761b8f55de35192aed9fef36fdef20a8193ec2013468312" Jan 22 14:11:03 crc kubenswrapper[4769]: I0122 14:11:03.032865 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-trlj5"] Jan 22 14:11:03 crc kubenswrapper[4769]: I0122 14:11:03.042363 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-trlj5"] Jan 22 14:11:04 crc kubenswrapper[4769]: I0122 14:11:04.885151 4769 scope.go:117] "RemoveContainer" containerID="e7bae61700e44833a742aeb47b10d93605fc854e6fc9c589859554c1c0a5b135" Jan 22 14:11:04 crc kubenswrapper[4769]: E0122 14:11:04.888334 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hwhw7_openshift-machine-config-operator(f0af8746-c9f0-48e6-8a60-02fed286b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419" Jan 22 14:11:04 crc kubenswrapper[4769]: I0122 14:11:04.899653 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4521e7ce-1245-4a18-9179-83a2b288e227" path="/var/lib/kubelet/pods/4521e7ce-1245-4a18-9179-83a2b288e227/volumes" Jan 22 14:11:05 crc kubenswrapper[4769]: I0122 14:11:05.180620 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-tbnjt/must-gather-nlc24"] Jan 22 14:11:05 crc kubenswrapper[4769]: I0122 14:11:05.181302 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-tbnjt/must-gather-nlc24" podUID="7529a8b3-1901-4ac4-9cee-f3ece4581ea8" containerName="copy" containerID="cri-o://1dc63ef307cab4453f502e73b5f525685fd266557500aa01a5c30784d48c028b" gracePeriod=2 Jan 22 14:11:05 crc kubenswrapper[4769]: I0122 14:11:05.189896 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-must-gather-tbnjt/must-gather-nlc24"] Jan 22 14:11:05 crc kubenswrapper[4769]: I0122 14:11:05.603744 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-tbnjt_must-gather-nlc24_7529a8b3-1901-4ac4-9cee-f3ece4581ea8/copy/0.log" Jan 22 14:11:05 crc kubenswrapper[4769]: I0122 14:11:05.604518 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tbnjt/must-gather-nlc24" Jan 22 14:11:05 crc kubenswrapper[4769]: I0122 14:11:05.654251 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7529a8b3-1901-4ac4-9cee-f3ece4581ea8-must-gather-output\") pod \"7529a8b3-1901-4ac4-9cee-f3ece4581ea8\" (UID: \"7529a8b3-1901-4ac4-9cee-f3ece4581ea8\") " Jan 22 14:11:05 crc kubenswrapper[4769]: I0122 14:11:05.654321 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v94jc\" (UniqueName: \"kubernetes.io/projected/7529a8b3-1901-4ac4-9cee-f3ece4581ea8-kube-api-access-v94jc\") pod \"7529a8b3-1901-4ac4-9cee-f3ece4581ea8\" (UID: \"7529a8b3-1901-4ac4-9cee-f3ece4581ea8\") " Jan 22 14:11:05 crc kubenswrapper[4769]: I0122 14:11:05.660288 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7529a8b3-1901-4ac4-9cee-f3ece4581ea8-kube-api-access-v94jc" (OuterVolumeSpecName: "kube-api-access-v94jc") pod "7529a8b3-1901-4ac4-9cee-f3ece4581ea8" (UID: "7529a8b3-1901-4ac4-9cee-f3ece4581ea8"). InnerVolumeSpecName "kube-api-access-v94jc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:11:05 crc kubenswrapper[4769]: I0122 14:11:05.756388 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v94jc\" (UniqueName: \"kubernetes.io/projected/7529a8b3-1901-4ac4-9cee-f3ece4581ea8-kube-api-access-v94jc\") on node \"crc\" DevicePath \"\"" Jan 22 14:11:05 crc kubenswrapper[4769]: I0122 14:11:05.831836 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7529a8b3-1901-4ac4-9cee-f3ece4581ea8-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "7529a8b3-1901-4ac4-9cee-f3ece4581ea8" (UID: "7529a8b3-1901-4ac4-9cee-f3ece4581ea8"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 14:11:05 crc kubenswrapper[4769]: I0122 14:11:05.858027 4769 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7529a8b3-1901-4ac4-9cee-f3ece4581ea8-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 22 14:11:06 crc kubenswrapper[4769]: I0122 14:11:06.020733 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-tbnjt_must-gather-nlc24_7529a8b3-1901-4ac4-9cee-f3ece4581ea8/copy/0.log" Jan 22 14:11:06 crc kubenswrapper[4769]: I0122 14:11:06.021275 4769 generic.go:334] "Generic (PLEG): container finished" podID="7529a8b3-1901-4ac4-9cee-f3ece4581ea8" containerID="1dc63ef307cab4453f502e73b5f525685fd266557500aa01a5c30784d48c028b" exitCode=143 Jan 22 14:11:06 crc kubenswrapper[4769]: I0122 14:11:06.021346 4769 scope.go:117] "RemoveContainer" containerID="1dc63ef307cab4453f502e73b5f525685fd266557500aa01a5c30784d48c028b" Jan 22 14:11:06 crc kubenswrapper[4769]: I0122 14:11:06.021348 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tbnjt/must-gather-nlc24" Jan 22 14:11:06 crc kubenswrapper[4769]: I0122 14:11:06.044080 4769 scope.go:117] "RemoveContainer" containerID="cd35217481d81f29cfb74abcbd43b14ccfe181f633147cf4756bf7bb55d0937b" Jan 22 14:11:06 crc kubenswrapper[4769]: I0122 14:11:06.130170 4769 scope.go:117] "RemoveContainer" containerID="1dc63ef307cab4453f502e73b5f525685fd266557500aa01a5c30784d48c028b" Jan 22 14:11:06 crc kubenswrapper[4769]: E0122 14:11:06.130958 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1dc63ef307cab4453f502e73b5f525685fd266557500aa01a5c30784d48c028b\": container with ID starting with 1dc63ef307cab4453f502e73b5f525685fd266557500aa01a5c30784d48c028b not found: ID does not exist" containerID="1dc63ef307cab4453f502e73b5f525685fd266557500aa01a5c30784d48c028b" Jan 22 14:11:06 crc kubenswrapper[4769]: I0122 14:11:06.131010 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1dc63ef307cab4453f502e73b5f525685fd266557500aa01a5c30784d48c028b"} err="failed to get container status \"1dc63ef307cab4453f502e73b5f525685fd266557500aa01a5c30784d48c028b\": rpc error: code = NotFound desc = could not find container \"1dc63ef307cab4453f502e73b5f525685fd266557500aa01a5c30784d48c028b\": container with ID starting with 1dc63ef307cab4453f502e73b5f525685fd266557500aa01a5c30784d48c028b not found: ID does not exist" Jan 22 14:11:06 crc kubenswrapper[4769]: I0122 14:11:06.131042 4769 scope.go:117] "RemoveContainer" containerID="cd35217481d81f29cfb74abcbd43b14ccfe181f633147cf4756bf7bb55d0937b" Jan 22 14:11:06 crc kubenswrapper[4769]: E0122 14:11:06.131574 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd35217481d81f29cfb74abcbd43b14ccfe181f633147cf4756bf7bb55d0937b\": container with ID starting with cd35217481d81f29cfb74abcbd43b14ccfe181f633147cf4756bf7bb55d0937b not found: ID does not exist" containerID="cd35217481d81f29cfb74abcbd43b14ccfe181f633147cf4756bf7bb55d0937b" Jan 22 14:11:06 crc kubenswrapper[4769]: I0122 14:11:06.131605 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd35217481d81f29cfb74abcbd43b14ccfe181f633147cf4756bf7bb55d0937b"} err="failed to get container status \"cd35217481d81f29cfb74abcbd43b14ccfe181f633147cf4756bf7bb55d0937b\": rpc error: code = NotFound desc = could not find container \"cd35217481d81f29cfb74abcbd43b14ccfe181f633147cf4756bf7bb55d0937b\": container with ID starting with cd35217481d81f29cfb74abcbd43b14ccfe181f633147cf4756bf7bb55d0937b not found: ID does not exist" Jan 22 14:11:06 crc kubenswrapper[4769]: I0122 14:11:06.895580 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7529a8b3-1901-4ac4-9cee-f3ece4581ea8" path="/var/lib/kubelet/pods/7529a8b3-1901-4ac4-9cee-f3ece4581ea8/volumes" Jan 22 14:11:07 crc kubenswrapper[4769]: I0122 14:11:07.037382 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-7r9tp"] Jan 22 14:11:07 crc kubenswrapper[4769]: I0122 14:11:07.047830 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-5nx2t"] Jan 22 14:11:07 crc kubenswrapper[4769]: I0122 14:11:07.056203 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-5nx2t"] Jan 22 14:11:07 crc kubenswrapper[4769]: I0122 14:11:07.062977 4769 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack/cinder-db-create-7r9tp"] Jan 22 14:11:08 crc kubenswrapper[4769]: I0122 14:11:08.028145 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-892lk"] Jan 22 14:11:08 crc kubenswrapper[4769]: I0122 14:11:08.038502 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-24cb-account-create-update-rtdf4"] Jan 22 14:11:08 crc kubenswrapper[4769]: I0122 14:11:08.047780 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-24cb-account-create-update-rtdf4"] Jan 22 14:11:08 crc kubenswrapper[4769]: I0122 14:11:08.056155 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-892lk"] Jan 22 14:11:08 crc kubenswrapper[4769]: I0122 14:11:08.897158 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d72603e-a10a-4490-8298-67db64d087fc" path="/var/lib/kubelet/pods/3d72603e-a10a-4490-8298-67db64d087fc/volumes" Jan 22 14:11:08 crc kubenswrapper[4769]: I0122 14:11:08.898479 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad0702a4-ee8a-45da-9cb7-40c2e4b257b9" path="/var/lib/kubelet/pods/ad0702a4-ee8a-45da-9cb7-40c2e4b257b9/volumes" Jan 22 14:11:08 crc kubenswrapper[4769]: I0122 14:11:08.899323 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae6aa3a5-37b1-42b7-9bac-b861b2d47bb0" path="/var/lib/kubelet/pods/ae6aa3a5-37b1-42b7-9bac-b861b2d47bb0/volumes" Jan 22 14:11:08 crc kubenswrapper[4769]: I0122 14:11:08.900139 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb68cb3e-c079-4e87-ae9b-be93a2b8b80e" path="/var/lib/kubelet/pods/cb68cb3e-c079-4e87-ae9b-be93a2b8b80e/volumes" Jan 22 14:11:11 crc kubenswrapper[4769]: I0122 14:11:11.028176 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-8372-account-create-update-lq4fn"] Jan 22 14:11:11 crc kubenswrapper[4769]: I0122 14:11:11.035817 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-8bb3-account-create-update-x6jhs"] Jan 22 14:11:11 crc kubenswrapper[4769]: I0122 14:11:11.042951 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-8372-account-create-update-lq4fn"] Jan 22 14:11:11 crc kubenswrapper[4769]: I0122 14:11:11.050432 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-8bb3-account-create-update-x6jhs"] Jan 22 14:11:12 crc kubenswrapper[4769]: I0122 14:11:12.900482 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51e2f7fd-cd2e-4a84-b62a-27915d32469c" path="/var/lib/kubelet/pods/51e2f7fd-cd2e-4a84-b62a-27915d32469c/volumes" Jan 22 14:11:12 crc kubenswrapper[4769]: I0122 14:11:12.902159 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec90402f-c994-4710-b82f-5c8cc3f12fdf" path="/var/lib/kubelet/pods/ec90402f-c994-4710-b82f-5c8cc3f12fdf/volumes" Jan 22 14:11:16 crc kubenswrapper[4769]: I0122 14:11:16.048527 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-r7c9w"] Jan 22 14:11:16 crc kubenswrapper[4769]: I0122 14:11:16.059095 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-r7c9w"] Jan 22 14:11:16 crc kubenswrapper[4769]: I0122 14:11:16.900115 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="275c0c66-cbd1-4469-81f6-c33a1eab0ed6" path="/var/lib/kubelet/pods/275c0c66-cbd1-4469-81f6-c33a1eab0ed6/volumes" Jan 22 14:11:18 crc kubenswrapper[4769]: I0122 
14:11:18.034263 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-t9sxw"] Jan 22 14:11:18 crc kubenswrapper[4769]: I0122 14:11:18.043260 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-t9sxw"] Jan 22 14:11:18 crc kubenswrapper[4769]: I0122 14:11:18.883515 4769 scope.go:117] "RemoveContainer" containerID="e7bae61700e44833a742aeb47b10d93605fc854e6fc9c589859554c1c0a5b135" Jan 22 14:11:18 crc kubenswrapper[4769]: E0122 14:11:18.884008 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hwhw7_openshift-machine-config-operator(f0af8746-c9f0-48e6-8a60-02fed286b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419" Jan 22 14:11:18 crc kubenswrapper[4769]: I0122 14:11:18.898672 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299" path="/var/lib/kubelet/pods/b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299/volumes" Jan 22 14:11:33 crc kubenswrapper[4769]: I0122 14:11:33.883504 4769 scope.go:117] "RemoveContainer" containerID="e7bae61700e44833a742aeb47b10d93605fc854e6fc9c589859554c1c0a5b135" Jan 22 14:11:33 crc kubenswrapper[4769]: E0122 14:11:33.884480 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hwhw7_openshift-machine-config-operator(f0af8746-c9f0-48e6-8a60-02fed286b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419" Jan 22 14:11:47 crc kubenswrapper[4769]: I0122 14:11:47.883850 4769 scope.go:117] "RemoveContainer" containerID="e7bae61700e44833a742aeb47b10d93605fc854e6fc9c589859554c1c0a5b135" Jan 22 14:11:47 crc kubenswrapper[4769]: E0122 14:11:47.884760 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hwhw7_openshift-machine-config-operator(f0af8746-c9f0-48e6-8a60-02fed286b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419" Jan 22 14:11:58 crc kubenswrapper[4769]: I0122 14:11:58.883693 4769 scope.go:117] "RemoveContainer" containerID="e7bae61700e44833a742aeb47b10d93605fc854e6fc9c589859554c1c0a5b135" Jan 22 14:11:58 crc kubenswrapper[4769]: E0122 14:11:58.899505 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hwhw7_openshift-machine-config-operator(f0af8746-c9f0-48e6-8a60-02fed286b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419" Jan 22 14:11:59 crc kubenswrapper[4769]: I0122 14:11:59.869581 4769 scope.go:117] "RemoveContainer" containerID="52648bb4b661a8c6c50f29dcbb2e628521c76a98f4664eeeaa26623f333c78ee" Jan 22 14:11:59 crc kubenswrapper[4769]: I0122 14:11:59.901037 4769 scope.go:117] "RemoveContainer" 
containerID="9adc3b6e5ed26c0015ab034169ba62530ada71abb392698e2ee878b4e52729c9" Jan 22 14:11:59 crc kubenswrapper[4769]: I0122 14:11:59.968232 4769 scope.go:117] "RemoveContainer" containerID="21355f679d3807ef130aaa327e0801fb4ef81abe61c9581a47edf5ff6be44534" Jan 22 14:12:00 crc kubenswrapper[4769]: I0122 14:12:00.024186 4769 scope.go:117] "RemoveContainer" containerID="3fff52ca9914171d818af9485b605a038595dddbd005e73b62529f4a697aa6bd" Jan 22 14:12:00 crc kubenswrapper[4769]: I0122 14:12:00.079527 4769 scope.go:117] "RemoveContainer" containerID="a23fe7e1f609804bd01eaf3b67aa868ecc07d3bf005fc4cf04bf270bb0eb13a4" Jan 22 14:12:00 crc kubenswrapper[4769]: I0122 14:12:00.113534 4769 scope.go:117] "RemoveContainer" containerID="77def06c9daefb086f0355ee46072f20bab89a75ed5e0bf4dc001c469ff25434" Jan 22 14:12:00 crc kubenswrapper[4769]: I0122 14:12:00.143376 4769 scope.go:117] "RemoveContainer" containerID="afe20a822b4f3e3d56773006d4aeb9478417b77dbf27f9940cbd13b2576b2dc2" Jan 22 14:12:00 crc kubenswrapper[4769]: I0122 14:12:00.163175 4769 scope.go:117] "RemoveContainer" containerID="61d9e5ec964872c1028545493f0b6a3c6f57bd0bc24e83e376180164d65cbfb4" Jan 22 14:12:00 crc kubenswrapper[4769]: I0122 14:12:00.189769 4769 scope.go:117] "RemoveContainer" containerID="09178c7f0f25de3bb2d0040621da54e6d9636a7e539ca3291149727833705d8f" Jan 22 14:12:06 crc kubenswrapper[4769]: I0122 14:12:06.061379 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-nv6tp"] Jan 22 14:12:06 crc kubenswrapper[4769]: I0122 14:12:06.071919 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-bjdj8"] Jan 22 14:12:06 crc kubenswrapper[4769]: I0122 14:12:06.085200 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-nv6tp"] Jan 22 14:12:06 crc kubenswrapper[4769]: I0122 14:12:06.093500 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-bjdj8"] Jan 22 14:12:06 crc kubenswrapper[4769]: I0122 14:12:06.895983 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b938618-acdf-4f5f-8a04-daabc17cbb0c" path="/var/lib/kubelet/pods/4b938618-acdf-4f5f-8a04-daabc17cbb0c/volumes" Jan 22 14:12:06 crc kubenswrapper[4769]: I0122 14:12:06.896643 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0e92228-1a9b-49fc-9dfd-0493f70f5ee8" path="/var/lib/kubelet/pods/a0e92228-1a9b-49fc-9dfd-0493f70f5ee8/volumes" Jan 22 14:12:13 crc kubenswrapper[4769]: I0122 14:12:13.884732 4769 scope.go:117] "RemoveContainer" containerID="e7bae61700e44833a742aeb47b10d93605fc854e6fc9c589859554c1c0a5b135" Jan 22 14:12:13 crc kubenswrapper[4769]: E0122 14:12:13.886147 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hwhw7_openshift-machine-config-operator(f0af8746-c9f0-48e6-8a60-02fed286b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419" Jan 22 14:12:14 crc kubenswrapper[4769]: I0122 14:12:14.050207 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-rqjpw"] Jan 22 14:12:14 crc kubenswrapper[4769]: I0122 14:12:14.057282 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-rqjpw"] Jan 22 14:12:14 crc kubenswrapper[4769]: I0122 14:12:14.898358 4769 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="f7c0ef06-5806-418c-8a10-81ea6afb0401" path="/var/lib/kubelet/pods/f7c0ef06-5806-418c-8a10-81ea6afb0401/volumes" Jan 22 14:12:24 crc kubenswrapper[4769]: I0122 14:12:24.031970 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-zzjpd"] Jan 22 14:12:24 crc kubenswrapper[4769]: I0122 14:12:24.039219 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-zzjpd"] Jan 22 14:12:24 crc kubenswrapper[4769]: I0122 14:12:24.894186 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7f766e1-262c-4861-a117-2454631e284f" path="/var/lib/kubelet/pods/a7f766e1-262c-4861-a117-2454631e284f/volumes" Jan 22 14:12:25 crc kubenswrapper[4769]: I0122 14:12:25.035720 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-l4hnw"] Jan 22 14:12:25 crc kubenswrapper[4769]: I0122 14:12:25.048701 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-l4hnw"] Jan 22 14:12:26 crc kubenswrapper[4769]: I0122 14:12:26.896212 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3eb8819f-512d-43d8-a59e-1ba8e7e1fb06" path="/var/lib/kubelet/pods/3eb8819f-512d-43d8-a59e-1ba8e7e1fb06/volumes" Jan 22 14:12:28 crc kubenswrapper[4769]: I0122 14:12:28.883429 4769 scope.go:117] "RemoveContainer" containerID="e7bae61700e44833a742aeb47b10d93605fc854e6fc9c589859554c1c0a5b135" Jan 22 14:12:28 crc kubenswrapper[4769]: E0122 14:12:28.884090 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hwhw7_openshift-machine-config-operator(f0af8746-c9f0-48e6-8a60-02fed286b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419" Jan 22 14:12:39 crc kubenswrapper[4769]: I0122 14:12:39.883329 4769 scope.go:117] "RemoveContainer" containerID="e7bae61700e44833a742aeb47b10d93605fc854e6fc9c589859554c1c0a5b135" Jan 22 14:12:39 crc kubenswrapper[4769]: E0122 14:12:39.884092 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hwhw7_openshift-machine-config-operator(f0af8746-c9f0-48e6-8a60-02fed286b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419" Jan 22 14:12:51 crc kubenswrapper[4769]: I0122 14:12:51.883825 4769 scope.go:117] "RemoveContainer" containerID="e7bae61700e44833a742aeb47b10d93605fc854e6fc9c589859554c1c0a5b135" Jan 22 14:12:51 crc kubenswrapper[4769]: E0122 14:12:51.884691 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hwhw7_openshift-machine-config-operator(f0af8746-c9f0-48e6-8a60-02fed286b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419" Jan 22 14:13:00 crc kubenswrapper[4769]: I0122 14:13:00.048228 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-fllmn"] Jan 22 14:13:00 crc kubenswrapper[4769]: I0122 14:13:00.055650 4769 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-fllmn"] Jan 22 14:13:00 crc kubenswrapper[4769]: I0122 14:13:00.065918 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-264d-account-create-update-4z8cb"] Jan 22 14:13:00 crc kubenswrapper[4769]: I0122 14:13:00.074457 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-264d-account-create-update-4z8cb"] Jan 22 14:13:00 crc kubenswrapper[4769]: I0122 14:13:00.385891 4769 scope.go:117] "RemoveContainer" containerID="fe625d5ef022f97b15014934b8ace95f1c730255ffa2604dde5ccc072b731811" Jan 22 14:13:00 crc kubenswrapper[4769]: I0122 14:13:00.423160 4769 scope.go:117] "RemoveContainer" containerID="7f8570350656236f2df14cf1385749f2acad79acf56a71c03ae5fb37c7ed236c" Jan 22 14:13:00 crc kubenswrapper[4769]: I0122 14:13:00.470770 4769 scope.go:117] "RemoveContainer" containerID="5e70825bce9fda82996c69d7184b5c0089e4b77074cca5f87821576c29bc3590" Jan 22 14:13:00 crc kubenswrapper[4769]: I0122 14:13:00.564277 4769 scope.go:117] "RemoveContainer" containerID="6a9857699ee5a25dcfbbfd97a9806c7b0bc9c1947fe854676a7dd2547f60a656" Jan 22 14:13:00 crc kubenswrapper[4769]: I0122 14:13:00.591675 4769 scope.go:117] "RemoveContainer" containerID="3c1a07b1b0fdcc85ff1215b6b0ffc50eb270b562fc9ca8873d111f3b05220e1b" Jan 22 14:13:00 crc kubenswrapper[4769]: I0122 14:13:00.666143 4769 scope.go:117] "RemoveContainer" containerID="4814c2687ce225a42dac55f4070477c0bf4c2e838fc60d85c396b3c0a24f2c9c" Jan 22 14:13:00 crc kubenswrapper[4769]: I0122 14:13:00.898084 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ecb8a996-384c-4155-b45d-6a6335165545" path="/var/lib/kubelet/pods/ecb8a996-384c-4155-b45d-6a6335165545/volumes" Jan 22 14:13:00 crc kubenswrapper[4769]: I0122 14:13:00.899218 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe68065a-9702-4440-a09a-2698d21ad5cc" path="/var/lib/kubelet/pods/fe68065a-9702-4440-a09a-2698d21ad5cc/volumes" Jan 22 14:13:01 crc kubenswrapper[4769]: I0122 14:13:01.037046 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-49d8-account-create-update-gnbhc"] Jan 22 14:13:01 crc kubenswrapper[4769]: I0122 14:13:01.052836 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-tx7mp"] Jan 22 14:13:01 crc kubenswrapper[4769]: I0122 14:13:01.059967 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-ddb8-account-create-update-zm48k"] Jan 22 14:13:01 crc kubenswrapper[4769]: I0122 14:13:01.067565 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-5t26t"] Jan 22 14:13:01 crc kubenswrapper[4769]: I0122 14:13:01.075083 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-49d8-account-create-update-gnbhc"] Jan 22 14:13:01 crc kubenswrapper[4769]: I0122 14:13:01.081657 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-tx7mp"] Jan 22 14:13:01 crc kubenswrapper[4769]: I0122 14:13:01.088228 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-ddb8-account-create-update-zm48k"] Jan 22 14:13:01 crc kubenswrapper[4769]: I0122 14:13:01.094596 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-5t26t"] Jan 22 14:13:02 crc kubenswrapper[4769]: I0122 14:13:02.913657 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="288566dc-b78e-46e4-9bd3-c61bc9c2a6ce" 
path="/var/lib/kubelet/pods/288566dc-b78e-46e4-9bd3-c61bc9c2a6ce/volumes" Jan 22 14:13:02 crc kubenswrapper[4769]: I0122 14:13:02.915155 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b33b7a35-52b8-47c6-b5a7-5cf87d838927" path="/var/lib/kubelet/pods/b33b7a35-52b8-47c6-b5a7-5cf87d838927/volumes" Jan 22 14:13:02 crc kubenswrapper[4769]: I0122 14:13:02.915942 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdcc2db5-9739-4e49-a6cc-3f7aff70f97d" path="/var/lib/kubelet/pods/cdcc2db5-9739-4e49-a6cc-3f7aff70f97d/volumes" Jan 22 14:13:02 crc kubenswrapper[4769]: I0122 14:13:02.916735 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e45f7c9a-23a2-40fe-80dc-305f1fbc8e17" path="/var/lib/kubelet/pods/e45f7c9a-23a2-40fe-80dc-305f1fbc8e17/volumes" Jan 22 14:13:03 crc kubenswrapper[4769]: I0122 14:13:03.883340 4769 scope.go:117] "RemoveContainer" containerID="e7bae61700e44833a742aeb47b10d93605fc854e6fc9c589859554c1c0a5b135" Jan 22 14:13:03 crc kubenswrapper[4769]: E0122 14:13:03.883627 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hwhw7_openshift-machine-config-operator(f0af8746-c9f0-48e6-8a60-02fed286b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419" Jan 22 14:13:17 crc kubenswrapper[4769]: I0122 14:13:17.883850 4769 scope.go:117] "RemoveContainer" containerID="e7bae61700e44833a742aeb47b10d93605fc854e6fc9c589859554c1c0a5b135" Jan 22 14:13:17 crc kubenswrapper[4769]: E0122 14:13:17.884628 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hwhw7_openshift-machine-config-operator(f0af8746-c9f0-48e6-8a60-02fed286b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419" Jan 22 14:13:29 crc kubenswrapper[4769]: I0122 14:13:29.884743 4769 scope.go:117] "RemoveContainer" containerID="e7bae61700e44833a742aeb47b10d93605fc854e6fc9c589859554c1c0a5b135" Jan 22 14:13:29 crc kubenswrapper[4769]: E0122 14:13:29.885583 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hwhw7_openshift-machine-config-operator(f0af8746-c9f0-48e6-8a60-02fed286b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419" Jan 22 14:13:30 crc kubenswrapper[4769]: I0122 14:13:30.046050 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-hql94"] Jan 22 14:13:30 crc kubenswrapper[4769]: I0122 14:13:30.053900 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-hql94"] Jan 22 14:13:30 crc kubenswrapper[4769]: I0122 14:13:30.894451 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf" path="/var/lib/kubelet/pods/4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf/volumes" Jan 22 14:13:42 crc kubenswrapper[4769]: I0122 14:13:42.884472 4769 scope.go:117] "RemoveContainer" 
containerID="e7bae61700e44833a742aeb47b10d93605fc854e6fc9c589859554c1c0a5b135" Jan 22 14:13:42 crc kubenswrapper[4769]: E0122 14:13:42.885748 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hwhw7_openshift-machine-config-operator(f0af8746-c9f0-48e6-8a60-02fed286b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419" Jan 22 14:13:55 crc kubenswrapper[4769]: I0122 14:13:55.047798 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-6vgx7"] Jan 22 14:13:55 crc kubenswrapper[4769]: I0122 14:13:55.059745 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-6vgx7"] Jan 22 14:13:56 crc kubenswrapper[4769]: I0122 14:13:56.884276 4769 scope.go:117] "RemoveContainer" containerID="e7bae61700e44833a742aeb47b10d93605fc854e6fc9c589859554c1c0a5b135" Jan 22 14:13:56 crc kubenswrapper[4769]: E0122 14:13:56.884847 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hwhw7_openshift-machine-config-operator(f0af8746-c9f0-48e6-8a60-02fed286b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419" Jan 22 14:13:56 crc kubenswrapper[4769]: I0122 14:13:56.895556 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3137766d-8b45-47a0-a7ca-f1a3c381450d" path="/var/lib/kubelet/pods/3137766d-8b45-47a0-a7ca-f1a3c381450d/volumes" Jan 22 14:13:57 crc kubenswrapper[4769]: I0122 14:13:57.048181 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-cg5m6"] Jan 22 14:13:57 crc kubenswrapper[4769]: I0122 14:13:57.058883 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-cg5m6"] Jan 22 14:13:58 crc kubenswrapper[4769]: I0122 14:13:58.899694 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60fa7062-c4e9-4700-88e1-af5262989c6f" path="/var/lib/kubelet/pods/60fa7062-c4e9-4700-88e1-af5262989c6f/volumes" Jan 22 14:14:00 crc kubenswrapper[4769]: I0122 14:14:00.812986 4769 scope.go:117] "RemoveContainer" containerID="7c716f4cbcf6f24dd054838f2140dd17dfc86e227f15ff8751421f1115943a30" Jan 22 14:14:00 crc kubenswrapper[4769]: I0122 14:14:00.855876 4769 scope.go:117] "RemoveContainer" containerID="35419b0caadf70dae858a9997b2843ac8c049f423da3e9c017409f33d3f2290e" Jan 22 14:14:00 crc kubenswrapper[4769]: I0122 14:14:00.891635 4769 scope.go:117] "RemoveContainer" containerID="afb16cda8136e3c60a4cc4eee0a34fec39387efd7fcb1e371afcd2d6220a3675" Jan 22 14:14:00 crc kubenswrapper[4769]: I0122 14:14:00.951656 4769 scope.go:117] "RemoveContainer" containerID="98cf78384a8d16885b92b730a74a3979d2ab97411451096f63dae1f0143aa7f4" Jan 22 14:14:00 crc kubenswrapper[4769]: I0122 14:14:00.969981 4769 scope.go:117] "RemoveContainer" containerID="18279fc40052f609766481b086ba6db177d4033484da61ddaf6b1e3ccb376090" Jan 22 14:14:01 crc kubenswrapper[4769]: I0122 14:14:01.037118 4769 scope.go:117] "RemoveContainer" containerID="5bf2e7be98fe42d0c15cb0b41bd3e6c08f22798c04acc10db52946a1a04187f4" Jan 22 14:14:01 crc kubenswrapper[4769]: I0122 14:14:01.072513 
4769 scope.go:117] "RemoveContainer" containerID="b968152c0d0005bd0bae6dd12531f4e3ac4944479a46e411981d500bf6e21a03" Jan 22 14:14:01 crc kubenswrapper[4769]: I0122 14:14:01.105731 4769 scope.go:117] "RemoveContainer" containerID="be7b8f38b3fcc55abca045ec63342b69733efd9d1dc30413ccf64f860152d0b1" Jan 22 14:14:01 crc kubenswrapper[4769]: I0122 14:14:01.122853 4769 scope.go:117] "RemoveContainer" containerID="751475c8a4f373e18f772a466e3903901a4fe7bb3bad0aaf09ffde9f52db0d97" Jan 22 14:14:08 crc kubenswrapper[4769]: I0122 14:14:08.883825 4769 scope.go:117] "RemoveContainer" containerID="e7bae61700e44833a742aeb47b10d93605fc854e6fc9c589859554c1c0a5b135" Jan 22 14:14:08 crc kubenswrapper[4769]: E0122 14:14:08.884648 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hwhw7_openshift-machine-config-operator(f0af8746-c9f0-48e6-8a60-02fed286b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419" Jan 22 14:14:20 crc kubenswrapper[4769]: I0122 14:14:20.888779 4769 scope.go:117] "RemoveContainer" containerID="e7bae61700e44833a742aeb47b10d93605fc854e6fc9c589859554c1c0a5b135" Jan 22 14:14:20 crc kubenswrapper[4769]: E0122 14:14:20.889558 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hwhw7_openshift-machine-config-operator(f0af8746-c9f0-48e6-8a60-02fed286b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419" Jan 22 14:14:35 crc kubenswrapper[4769]: I0122 14:14:35.884778 4769 scope.go:117] "RemoveContainer" containerID="e7bae61700e44833a742aeb47b10d93605fc854e6fc9c589859554c1c0a5b135" Jan 22 14:14:35 crc kubenswrapper[4769]: E0122 14:14:35.885879 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hwhw7_openshift-machine-config-operator(f0af8746-c9f0-48e6-8a60-02fed286b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419" Jan 22 14:14:38 crc kubenswrapper[4769]: I0122 14:14:38.056619 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-5j7zn"] Jan 22 14:14:38 crc kubenswrapper[4769]: I0122 14:14:38.064041 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-5j7zn"] Jan 22 14:14:38 crc kubenswrapper[4769]: I0122 14:14:38.904584 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b01ed3a-6c71-4384-80a2-59814d125061" path="/var/lib/kubelet/pods/4b01ed3a-6c71-4384-80a2-59814d125061/volumes" Jan 22 14:14:48 crc kubenswrapper[4769]: I0122 14:14:48.885669 4769 scope.go:117] "RemoveContainer" containerID="e7bae61700e44833a742aeb47b10d93605fc854e6fc9c589859554c1c0a5b135" Jan 22 14:14:48 crc kubenswrapper[4769]: E0122 14:14:48.886498 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Jan 22 14:14:48 crc kubenswrapper[4769]: E0122 14:14:48.886498 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hwhw7_openshift-machine-config-operator(f0af8746-c9f0-48e6-8a60-02fed286b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419"
Jan 22 14:15:00 crc kubenswrapper[4769]: I0122 14:15:00.159897 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484855-6d7tp"]
Jan 22 14:15:00 crc kubenswrapper[4769]: E0122 14:15:00.161521 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d134a86-4a31-4784-b202-723a7c7f7249" containerName="registry-server"
Jan 22 14:15:00 crc kubenswrapper[4769]: I0122 14:15:00.161544 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d134a86-4a31-4784-b202-723a7c7f7249" containerName="registry-server"
Jan 22 14:15:00 crc kubenswrapper[4769]: E0122 14:15:00.161567 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7529a8b3-1901-4ac4-9cee-f3ece4581ea8" containerName="gather"
Jan 22 14:15:00 crc kubenswrapper[4769]: I0122 14:15:00.161574 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="7529a8b3-1901-4ac4-9cee-f3ece4581ea8" containerName="gather"
Jan 22 14:15:00 crc kubenswrapper[4769]: E0122 14:15:00.161597 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7529a8b3-1901-4ac4-9cee-f3ece4581ea8" containerName="copy"
Jan 22 14:15:00 crc kubenswrapper[4769]: I0122 14:15:00.161606 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="7529a8b3-1901-4ac4-9cee-f3ece4581ea8" containerName="copy"
Jan 22 14:15:00 crc kubenswrapper[4769]: E0122 14:15:00.161631 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d134a86-4a31-4784-b202-723a7c7f7249" containerName="extract-content"
Jan 22 14:15:00 crc kubenswrapper[4769]: I0122 14:15:00.161638 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d134a86-4a31-4784-b202-723a7c7f7249" containerName="extract-content"
Jan 22 14:15:00 crc kubenswrapper[4769]: E0122 14:15:00.161659 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d134a86-4a31-4784-b202-723a7c7f7249" containerName="extract-utilities"
Jan 22 14:15:00 crc kubenswrapper[4769]: I0122 14:15:00.161666 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d134a86-4a31-4784-b202-723a7c7f7249" containerName="extract-utilities"
Jan 22 14:15:00 crc kubenswrapper[4769]: I0122 14:15:00.161894 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="7529a8b3-1901-4ac4-9cee-f3ece4581ea8" containerName="gather"
Jan 22 14:15:00 crc kubenswrapper[4769]: I0122 14:15:00.161907 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="7529a8b3-1901-4ac4-9cee-f3ece4581ea8" containerName="copy"
Jan 22 14:15:00 crc kubenswrapper[4769]: I0122 14:15:00.161929 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d134a86-4a31-4784-b202-723a7c7f7249" containerName="registry-server"
Jan 22 14:15:00 crc kubenswrapper[4769]: I0122 14:15:00.162997 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484855-6d7tp"
Jan 22 14:15:00 crc kubenswrapper[4769]: I0122 14:15:00.165631 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 22 14:15:00 crc kubenswrapper[4769]: I0122 14:15:00.165945 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 22 14:15:00 crc kubenswrapper[4769]: I0122 14:15:00.181213 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484855-6d7tp"]
Jan 22 14:15:00 crc kubenswrapper[4769]: I0122 14:15:00.239290 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d92b152a-d52b-485d-ac4d-1a1d7aeb2860-secret-volume\") pod \"collect-profiles-29484855-6d7tp\" (UID: \"d92b152a-d52b-485d-ac4d-1a1d7aeb2860\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484855-6d7tp"
Jan 22 14:15:00 crc kubenswrapper[4769]: I0122 14:15:00.239692 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2j4b\" (UniqueName: \"kubernetes.io/projected/d92b152a-d52b-485d-ac4d-1a1d7aeb2860-kube-api-access-k2j4b\") pod \"collect-profiles-29484855-6d7tp\" (UID: \"d92b152a-d52b-485d-ac4d-1a1d7aeb2860\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484855-6d7tp"
Jan 22 14:15:00 crc kubenswrapper[4769]: I0122 14:15:00.239754 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d92b152a-d52b-485d-ac4d-1a1d7aeb2860-config-volume\") pod \"collect-profiles-29484855-6d7tp\" (UID: \"d92b152a-d52b-485d-ac4d-1a1d7aeb2860\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484855-6d7tp"
Jan 22 14:15:00 crc kubenswrapper[4769]: I0122 14:15:00.341094 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2j4b\" (UniqueName: \"kubernetes.io/projected/d92b152a-d52b-485d-ac4d-1a1d7aeb2860-kube-api-access-k2j4b\") pod \"collect-profiles-29484855-6d7tp\" (UID: \"d92b152a-d52b-485d-ac4d-1a1d7aeb2860\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484855-6d7tp"
Jan 22 14:15:00 crc kubenswrapper[4769]: I0122 14:15:00.341204 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d92b152a-d52b-485d-ac4d-1a1d7aeb2860-config-volume\") pod \"collect-profiles-29484855-6d7tp\" (UID: \"d92b152a-d52b-485d-ac4d-1a1d7aeb2860\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484855-6d7tp"
Jan 22 14:15:00 crc kubenswrapper[4769]: I0122 14:15:00.341375 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d92b152a-d52b-485d-ac4d-1a1d7aeb2860-secret-volume\") pod \"collect-profiles-29484855-6d7tp\" (UID: \"d92b152a-d52b-485d-ac4d-1a1d7aeb2860\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484855-6d7tp"
Jan 22 14:15:00 crc kubenswrapper[4769]: I0122 14:15:00.342388 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d92b152a-d52b-485d-ac4d-1a1d7aeb2860-config-volume\") pod \"collect-profiles-29484855-6d7tp\" (UID: \"d92b152a-d52b-485d-ac4d-1a1d7aeb2860\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484855-6d7tp"
Jan 22 14:15:00 crc kubenswrapper[4769]: I0122 14:15:00.353372 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d92b152a-d52b-485d-ac4d-1a1d7aeb2860-secret-volume\") pod \"collect-profiles-29484855-6d7tp\" (UID: \"d92b152a-d52b-485d-ac4d-1a1d7aeb2860\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484855-6d7tp"
Jan 22 14:15:00 crc kubenswrapper[4769]: I0122 14:15:00.357352 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2j4b\" (UniqueName: \"kubernetes.io/projected/d92b152a-d52b-485d-ac4d-1a1d7aeb2860-kube-api-access-k2j4b\") pod \"collect-profiles-29484855-6d7tp\" (UID: \"d92b152a-d52b-485d-ac4d-1a1d7aeb2860\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484855-6d7tp"
Jan 22 14:15:00 crc kubenswrapper[4769]: I0122 14:15:00.483224 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484855-6d7tp"
Jan 22 14:15:01 crc kubenswrapper[4769]: I0122 14:15:01.272224 4769 scope.go:117] "RemoveContainer" containerID="8cbd39a1426db3df58f12d00edd2c60b7040ef05de418ca23684e54739a301fe"
Jan 22 14:15:01 crc kubenswrapper[4769]: I0122 14:15:01.434312 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484855-6d7tp"]
Jan 22 14:15:01 crc kubenswrapper[4769]: I0122 14:15:01.882940 4769 scope.go:117] "RemoveContainer" containerID="e7bae61700e44833a742aeb47b10d93605fc854e6fc9c589859554c1c0a5b135"
Jan 22 14:15:01 crc kubenswrapper[4769]: E0122 14:15:01.883509 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hwhw7_openshift-machine-config-operator(f0af8746-c9f0-48e6-8a60-02fed286b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419"
Jan 22 14:15:02 crc kubenswrapper[4769]: I0122 14:15:02.054300 4769 generic.go:334] "Generic (PLEG): container finished" podID="d92b152a-d52b-485d-ac4d-1a1d7aeb2860" containerID="992637ce194fccd5a303fec905828cb2363a9eefa90fe85c77b972b7ffcfa2b2" exitCode=0
Jan 22 14:15:02 crc kubenswrapper[4769]: I0122 14:15:02.054455 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484855-6d7tp" event={"ID":"d92b152a-d52b-485d-ac4d-1a1d7aeb2860","Type":"ContainerDied","Data":"992637ce194fccd5a303fec905828cb2363a9eefa90fe85c77b972b7ffcfa2b2"}
Jan 22 14:15:02 crc kubenswrapper[4769]: I0122 14:15:02.054750 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484855-6d7tp" event={"ID":"d92b152a-d52b-485d-ac4d-1a1d7aeb2860","Type":"ContainerStarted","Data":"9cd2b5f156dae02c64f866098caf245e8a4aa2fa501ad5b80d6991511069a8ac"}
Jan 22 14:15:03 crc kubenswrapper[4769]: I0122 14:15:03.424301 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484855-6d7tp"
Jan 22 14:15:03 crc kubenswrapper[4769]: I0122 14:15:03.602442 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d92b152a-d52b-485d-ac4d-1a1d7aeb2860-config-volume\") pod \"d92b152a-d52b-485d-ac4d-1a1d7aeb2860\" (UID: \"d92b152a-d52b-485d-ac4d-1a1d7aeb2860\") "
Jan 22 14:15:03 crc kubenswrapper[4769]: I0122 14:15:03.602555 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d92b152a-d52b-485d-ac4d-1a1d7aeb2860-secret-volume\") pod \"d92b152a-d52b-485d-ac4d-1a1d7aeb2860\" (UID: \"d92b152a-d52b-485d-ac4d-1a1d7aeb2860\") "
Jan 22 14:15:03 crc kubenswrapper[4769]: I0122 14:15:03.602588 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2j4b\" (UniqueName: \"kubernetes.io/projected/d92b152a-d52b-485d-ac4d-1a1d7aeb2860-kube-api-access-k2j4b\") pod \"d92b152a-d52b-485d-ac4d-1a1d7aeb2860\" (UID: \"d92b152a-d52b-485d-ac4d-1a1d7aeb2860\") "
Jan 22 14:15:03 crc kubenswrapper[4769]: I0122 14:15:03.603370 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d92b152a-d52b-485d-ac4d-1a1d7aeb2860-config-volume" (OuterVolumeSpecName: "config-volume") pod "d92b152a-d52b-485d-ac4d-1a1d7aeb2860" (UID: "d92b152a-d52b-485d-ac4d-1a1d7aeb2860"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 14:15:03 crc kubenswrapper[4769]: I0122 14:15:03.608309 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d92b152a-d52b-485d-ac4d-1a1d7aeb2860-kube-api-access-k2j4b" (OuterVolumeSpecName: "kube-api-access-k2j4b") pod "d92b152a-d52b-485d-ac4d-1a1d7aeb2860" (UID: "d92b152a-d52b-485d-ac4d-1a1d7aeb2860"). InnerVolumeSpecName "kube-api-access-k2j4b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 14:15:03 crc kubenswrapper[4769]: I0122 14:15:03.608903 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d92b152a-d52b-485d-ac4d-1a1d7aeb2860-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d92b152a-d52b-485d-ac4d-1a1d7aeb2860" (UID: "d92b152a-d52b-485d-ac4d-1a1d7aeb2860"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 14:15:03 crc kubenswrapper[4769]: I0122 14:15:03.704491 4769 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d92b152a-d52b-485d-ac4d-1a1d7aeb2860-config-volume\") on node \"crc\" DevicePath \"\""
Jan 22 14:15:03 crc kubenswrapper[4769]: I0122 14:15:03.704545 4769 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d92b152a-d52b-485d-ac4d-1a1d7aeb2860-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 22 14:15:03 crc kubenswrapper[4769]: I0122 14:15:03.704592 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2j4b\" (UniqueName: \"kubernetes.io/projected/d92b152a-d52b-485d-ac4d-1a1d7aeb2860-kube-api-access-k2j4b\") on node \"crc\" DevicePath \"\""
Jan 22 14:15:04 crc kubenswrapper[4769]: I0122 14:15:04.074698 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484855-6d7tp" event={"ID":"d92b152a-d52b-485d-ac4d-1a1d7aeb2860","Type":"ContainerDied","Data":"9cd2b5f156dae02c64f866098caf245e8a4aa2fa501ad5b80d6991511069a8ac"}
Jan 22 14:15:04 crc kubenswrapper[4769]: I0122 14:15:04.074997 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9cd2b5f156dae02c64f866098caf245e8a4aa2fa501ad5b80d6991511069a8ac"
Jan 22 14:15:04 crc kubenswrapper[4769]: I0122 14:15:04.074829 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484855-6d7tp"
Jan 22 14:15:13 crc kubenswrapper[4769]: I0122 14:15:13.883692 4769 scope.go:117] "RemoveContainer" containerID="e7bae61700e44833a742aeb47b10d93605fc854e6fc9c589859554c1c0a5b135"
Jan 22 14:15:14 crc kubenswrapper[4769]: I0122 14:15:14.161878 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" event={"ID":"f0af8746-c9f0-48e6-8a60-02fed286b419","Type":"ContainerStarted","Data":"1b539b91007d2422b8c1024f2341ad6f8d19130ea18927a43a72cc443195739d"}
Jan 22 14:15:17 crc kubenswrapper[4769]: I0122 14:15:17.823339 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-w59m8"]
Jan 22 14:15:17 crc kubenswrapper[4769]: E0122 14:15:17.824665 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d92b152a-d52b-485d-ac4d-1a1d7aeb2860" containerName="collect-profiles"
Jan 22 14:15:17 crc kubenswrapper[4769]: I0122 14:15:17.824686 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="d92b152a-d52b-485d-ac4d-1a1d7aeb2860" containerName="collect-profiles"
Jan 22 14:15:17 crc kubenswrapper[4769]: I0122 14:15:17.825030 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="d92b152a-d52b-485d-ac4d-1a1d7aeb2860" containerName="collect-profiles"
Jan 22 14:15:17 crc kubenswrapper[4769]: I0122 14:15:17.827117 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w59m8"
Jan 22 14:15:17 crc kubenswrapper[4769]: I0122 14:15:17.850874 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w59m8"]
Jan 22 14:15:17 crc kubenswrapper[4769]: I0122 14:15:17.892482 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89d8ffac-0cb1-44e1-96ea-49d6d2509769-catalog-content\") pod \"community-operators-w59m8\" (UID: \"89d8ffac-0cb1-44e1-96ea-49d6d2509769\") " pod="openshift-marketplace/community-operators-w59m8"
Jan 22 14:15:17 crc kubenswrapper[4769]: I0122 14:15:17.892562 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89d8ffac-0cb1-44e1-96ea-49d6d2509769-utilities\") pod \"community-operators-w59m8\" (UID: \"89d8ffac-0cb1-44e1-96ea-49d6d2509769\") " pod="openshift-marketplace/community-operators-w59m8"
Jan 22 14:15:17 crc kubenswrapper[4769]: I0122 14:15:17.892659 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4brpz\" (UniqueName: \"kubernetes.io/projected/89d8ffac-0cb1-44e1-96ea-49d6d2509769-kube-api-access-4brpz\") pod \"community-operators-w59m8\" (UID: \"89d8ffac-0cb1-44e1-96ea-49d6d2509769\") " pod="openshift-marketplace/community-operators-w59m8"
Jan 22 14:15:17 crc kubenswrapper[4769]: I0122 14:15:17.994401 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89d8ffac-0cb1-44e1-96ea-49d6d2509769-catalog-content\") pod \"community-operators-w59m8\" (UID: \"89d8ffac-0cb1-44e1-96ea-49d6d2509769\") " pod="openshift-marketplace/community-operators-w59m8"
Jan 22 14:15:17 crc kubenswrapper[4769]: I0122 14:15:17.994465 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89d8ffac-0cb1-44e1-96ea-49d6d2509769-utilities\") pod \"community-operators-w59m8\" (UID: \"89d8ffac-0cb1-44e1-96ea-49d6d2509769\") " pod="openshift-marketplace/community-operators-w59m8"
Jan 22 14:15:17 crc kubenswrapper[4769]: I0122 14:15:17.994561 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4brpz\" (UniqueName: \"kubernetes.io/projected/89d8ffac-0cb1-44e1-96ea-49d6d2509769-kube-api-access-4brpz\") pod \"community-operators-w59m8\" (UID: \"89d8ffac-0cb1-44e1-96ea-49d6d2509769\") " pod="openshift-marketplace/community-operators-w59m8"
Jan 22 14:15:17 crc kubenswrapper[4769]: I0122 14:15:17.995716 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89d8ffac-0cb1-44e1-96ea-49d6d2509769-catalog-content\") pod \"community-operators-w59m8\" (UID: \"89d8ffac-0cb1-44e1-96ea-49d6d2509769\") " pod="openshift-marketplace/community-operators-w59m8"
Jan 22 14:15:17 crc kubenswrapper[4769]: I0122 14:15:17.996203 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89d8ffac-0cb1-44e1-96ea-49d6d2509769-utilities\") pod \"community-operators-w59m8\" (UID: \"89d8ffac-0cb1-44e1-96ea-49d6d2509769\") " pod="openshift-marketplace/community-operators-w59m8"
Jan 22 14:15:18 crc kubenswrapper[4769]: I0122 14:15:18.016046 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4brpz\" (UniqueName: \"kubernetes.io/projected/89d8ffac-0cb1-44e1-96ea-49d6d2509769-kube-api-access-4brpz\") pod \"community-operators-w59m8\" (UID: \"89d8ffac-0cb1-44e1-96ea-49d6d2509769\") " pod="openshift-marketplace/community-operators-w59m8"
Jan 22 14:15:18 crc kubenswrapper[4769]: I0122 14:15:18.151162 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w59m8"
Jan 22 14:15:18 crc kubenswrapper[4769]: I0122 14:15:18.427404 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rjx2h"]
Jan 22 14:15:18 crc kubenswrapper[4769]: I0122 14:15:18.437025 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rjx2h"
Jan 22 14:15:18 crc kubenswrapper[4769]: I0122 14:15:18.481499 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rjx2h"]
Jan 22 14:15:18 crc kubenswrapper[4769]: I0122 14:15:18.504184 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w59m8"]
Jan 22 14:15:18 crc kubenswrapper[4769]: I0122 14:15:18.606255 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d77346e-c3d4-43e4-884a-46f2ae336e47-utilities\") pod \"redhat-operators-rjx2h\" (UID: \"9d77346e-c3d4-43e4-884a-46f2ae336e47\") " pod="openshift-marketplace/redhat-operators-rjx2h"
Jan 22 14:15:18 crc kubenswrapper[4769]: I0122 14:15:18.606377 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48knx\" (UniqueName: \"kubernetes.io/projected/9d77346e-c3d4-43e4-884a-46f2ae336e47-kube-api-access-48knx\") pod \"redhat-operators-rjx2h\" (UID: \"9d77346e-c3d4-43e4-884a-46f2ae336e47\") " pod="openshift-marketplace/redhat-operators-rjx2h"
Jan 22 14:15:18 crc kubenswrapper[4769]: I0122 14:15:18.606457 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d77346e-c3d4-43e4-884a-46f2ae336e47-catalog-content\") pod \"redhat-operators-rjx2h\" (UID: \"9d77346e-c3d4-43e4-884a-46f2ae336e47\") " pod="openshift-marketplace/redhat-operators-rjx2h"
Jan 22 14:15:18 crc kubenswrapper[4769]: I0122 14:15:18.707756 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d77346e-c3d4-43e4-884a-46f2ae336e47-utilities\") pod \"redhat-operators-rjx2h\" (UID: \"9d77346e-c3d4-43e4-884a-46f2ae336e47\") " pod="openshift-marketplace/redhat-operators-rjx2h"
Jan 22 14:15:18 crc kubenswrapper[4769]: I0122 14:15:18.708105 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48knx\" (UniqueName: \"kubernetes.io/projected/9d77346e-c3d4-43e4-884a-46f2ae336e47-kube-api-access-48knx\") pod \"redhat-operators-rjx2h\" (UID: \"9d77346e-c3d4-43e4-884a-46f2ae336e47\") " pod="openshift-marketplace/redhat-operators-rjx2h"
Jan 22 14:15:18 crc kubenswrapper[4769]: I0122 14:15:18.708196 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d77346e-c3d4-43e4-884a-46f2ae336e47-catalog-content\") pod \"redhat-operators-rjx2h\" (UID: \"9d77346e-c3d4-43e4-884a-46f2ae336e47\") " pod="openshift-marketplace/redhat-operators-rjx2h"
Jan 22 14:15:18 crc kubenswrapper[4769]: I0122 14:15:18.708301 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d77346e-c3d4-43e4-884a-46f2ae336e47-utilities\") pod \"redhat-operators-rjx2h\" (UID: \"9d77346e-c3d4-43e4-884a-46f2ae336e47\") " pod="openshift-marketplace/redhat-operators-rjx2h"
Jan 22 14:15:18 crc kubenswrapper[4769]: I0122 14:15:18.708547 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d77346e-c3d4-43e4-884a-46f2ae336e47-catalog-content\") pod \"redhat-operators-rjx2h\" (UID: \"9d77346e-c3d4-43e4-884a-46f2ae336e47\") " pod="openshift-marketplace/redhat-operators-rjx2h"
Jan 22 14:15:18 crc kubenswrapper[4769]: I0122 14:15:18.728208 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48knx\" (UniqueName: \"kubernetes.io/projected/9d77346e-c3d4-43e4-884a-46f2ae336e47-kube-api-access-48knx\") pod \"redhat-operators-rjx2h\" (UID: \"9d77346e-c3d4-43e4-884a-46f2ae336e47\") " pod="openshift-marketplace/redhat-operators-rjx2h"
Jan 22 14:15:18 crc kubenswrapper[4769]: I0122 14:15:18.781235 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rjx2h"
Jan 22 14:15:19 crc kubenswrapper[4769]: I0122 14:15:19.203172 4769 generic.go:334] "Generic (PLEG): container finished" podID="89d8ffac-0cb1-44e1-96ea-49d6d2509769" containerID="1fe3209be1f2e07b6f1c6f971be336e73063e24a90efee784205b5fed64590b0" exitCode=0
Jan 22 14:15:19 crc kubenswrapper[4769]: I0122 14:15:19.203316 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w59m8" event={"ID":"89d8ffac-0cb1-44e1-96ea-49d6d2509769","Type":"ContainerDied","Data":"1fe3209be1f2e07b6f1c6f971be336e73063e24a90efee784205b5fed64590b0"}
Jan 22 14:15:19 crc kubenswrapper[4769]: I0122 14:15:19.205784 4769 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 22 14:15:19 crc kubenswrapper[4769]: I0122 14:15:19.206047 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w59m8" event={"ID":"89d8ffac-0cb1-44e1-96ea-49d6d2509769","Type":"ContainerStarted","Data":"84c238234d2108930635a90652b12b24d8ae96240a5d21a6d053800640059ac4"}
Jan 22 14:15:19 crc kubenswrapper[4769]: I0122 14:15:19.261860 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rjx2h"]
Jan 22 14:15:19 crc kubenswrapper[4769]: W0122 14:15:19.273881 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d77346e_c3d4_43e4_884a_46f2ae336e47.slice/crio-5dcd8a7cf4e2ee8d5a4145e9f3f622fffcf740f9e57227c0646fd565d73493e7 WatchSource:0}: Error finding container 5dcd8a7cf4e2ee8d5a4145e9f3f622fffcf740f9e57227c0646fd565d73493e7: Status 404 returned error can't find the container with id 5dcd8a7cf4e2ee8d5a4145e9f3f622fffcf740f9e57227c0646fd565d73493e7
Jan 22 14:15:20 crc kubenswrapper[4769]: I0122 14:15:20.215933 4769 generic.go:334] "Generic (PLEG): container finished" podID="9d77346e-c3d4-43e4-884a-46f2ae336e47" containerID="78569a02355191754b78509ff5f795c5a489b088ebb5a909cbf7c91e86823f84" exitCode=0
Jan 22 14:15:20 crc kubenswrapper[4769]: I0122 14:15:20.216085 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rjx2h" event={"ID":"9d77346e-c3d4-43e4-884a-46f2ae336e47","Type":"ContainerDied","Data":"78569a02355191754b78509ff5f795c5a489b088ebb5a909cbf7c91e86823f84"}
Jan 22 14:15:20 crc kubenswrapper[4769]: I0122 14:15:20.216684 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rjx2h" event={"ID":"9d77346e-c3d4-43e4-884a-46f2ae336e47","Type":"ContainerStarted","Data":"5dcd8a7cf4e2ee8d5a4145e9f3f622fffcf740f9e57227c0646fd565d73493e7"}
Jan 22 14:15:20 crc kubenswrapper[4769]: I0122 14:15:20.816294 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-45z4x"]
Jan 22 14:15:20 crc kubenswrapper[4769]: I0122 14:15:20.818410 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-45z4x"
Jan 22 14:15:20 crc kubenswrapper[4769]: I0122 14:15:20.832386 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-45z4x"]
Jan 22 14:15:20 crc kubenswrapper[4769]: I0122 14:15:20.959584 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d576c\" (UniqueName: \"kubernetes.io/projected/a8e33a9b-67b4-4a7d-b205-3ff0c8e6b305-kube-api-access-d576c\") pod \"certified-operators-45z4x\" (UID: \"a8e33a9b-67b4-4a7d-b205-3ff0c8e6b305\") " pod="openshift-marketplace/certified-operators-45z4x"
Jan 22 14:15:20 crc kubenswrapper[4769]: I0122 14:15:20.960003 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8e33a9b-67b4-4a7d-b205-3ff0c8e6b305-catalog-content\") pod \"certified-operators-45z4x\" (UID: \"a8e33a9b-67b4-4a7d-b205-3ff0c8e6b305\") " pod="openshift-marketplace/certified-operators-45z4x"
Jan 22 14:15:20 crc kubenswrapper[4769]: I0122 14:15:20.960109 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8e33a9b-67b4-4a7d-b205-3ff0c8e6b305-utilities\") pod \"certified-operators-45z4x\" (UID: \"a8e33a9b-67b4-4a7d-b205-3ff0c8e6b305\") " pod="openshift-marketplace/certified-operators-45z4x"
Jan 22 14:15:21 crc kubenswrapper[4769]: I0122 14:15:21.061457 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d576c\" (UniqueName: \"kubernetes.io/projected/a8e33a9b-67b4-4a7d-b205-3ff0c8e6b305-kube-api-access-d576c\") pod \"certified-operators-45z4x\" (UID: \"a8e33a9b-67b4-4a7d-b205-3ff0c8e6b305\") " pod="openshift-marketplace/certified-operators-45z4x"
Jan 22 14:15:21 crc kubenswrapper[4769]: I0122 14:15:21.061574 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8e33a9b-67b4-4a7d-b205-3ff0c8e6b305-catalog-content\") pod \"certified-operators-45z4x\" (UID: \"a8e33a9b-67b4-4a7d-b205-3ff0c8e6b305\") " pod="openshift-marketplace/certified-operators-45z4x"
Jan 22 14:15:21 crc kubenswrapper[4769]: I0122 14:15:21.061653 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8e33a9b-67b4-4a7d-b205-3ff0c8e6b305-utilities\") pod \"certified-operators-45z4x\" (UID: \"a8e33a9b-67b4-4a7d-b205-3ff0c8e6b305\") " pod="openshift-marketplace/certified-operators-45z4x"
Jan 22 14:15:21 crc kubenswrapper[4769]: I0122 14:15:21.062181 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8e33a9b-67b4-4a7d-b205-3ff0c8e6b305-catalog-content\") pod \"certified-operators-45z4x\" (UID: \"a8e33a9b-67b4-4a7d-b205-3ff0c8e6b305\") " pod="openshift-marketplace/certified-operators-45z4x"
Jan 22 14:15:21 crc kubenswrapper[4769]: I0122 14:15:21.062236 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8e33a9b-67b4-4a7d-b205-3ff0c8e6b305-utilities\") pod \"certified-operators-45z4x\" (UID: \"a8e33a9b-67b4-4a7d-b205-3ff0c8e6b305\") " pod="openshift-marketplace/certified-operators-45z4x"
Jan 22 14:15:21 crc kubenswrapper[4769]: I0122 14:15:21.098892 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d576c\" (UniqueName: \"kubernetes.io/projected/a8e33a9b-67b4-4a7d-b205-3ff0c8e6b305-kube-api-access-d576c\") pod \"certified-operators-45z4x\" (UID: \"a8e33a9b-67b4-4a7d-b205-3ff0c8e6b305\") " pod="openshift-marketplace/certified-operators-45z4x"
Jan 22 14:15:21 crc kubenswrapper[4769]: I0122 14:15:21.147480 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-45z4x"
Jan 22 14:15:21 crc kubenswrapper[4769]: I0122 14:15:21.225182 4769 generic.go:334] "Generic (PLEG): container finished" podID="89d8ffac-0cb1-44e1-96ea-49d6d2509769" containerID="e2e39f76126f8bbd40b5a4f1c173b388f13f91e1cca3fcaab1958b4eece7c7db" exitCode=0
Jan 22 14:15:21 crc kubenswrapper[4769]: I0122 14:15:21.225229 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w59m8" event={"ID":"89d8ffac-0cb1-44e1-96ea-49d6d2509769","Type":"ContainerDied","Data":"e2e39f76126f8bbd40b5a4f1c173b388f13f91e1cca3fcaab1958b4eece7c7db"}
Jan 22 14:15:21 crc kubenswrapper[4769]: I0122 14:15:21.751656 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-45z4x"]
Jan 22 14:15:22 crc kubenswrapper[4769]: I0122 14:15:22.246265 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rjx2h" event={"ID":"9d77346e-c3d4-43e4-884a-46f2ae336e47","Type":"ContainerStarted","Data":"98678ace4234205b281cb29c8f2b7e40ec6299f0e377b57a17c6ee94a1f915ae"}
Jan 22 14:15:22 crc kubenswrapper[4769]: I0122 14:15:22.248625 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w59m8" event={"ID":"89d8ffac-0cb1-44e1-96ea-49d6d2509769","Type":"ContainerStarted","Data":"f8336f2a7381e30091966db141df9897f5480ba82dcfac91944940d5b2c75082"}
Jan 22 14:15:22 crc kubenswrapper[4769]: I0122 14:15:22.251442 4769 generic.go:334] "Generic (PLEG): container finished" podID="a8e33a9b-67b4-4a7d-b205-3ff0c8e6b305" containerID="57234f418d799a66b12c9e0541d981690336edf3570e905842072e2e297f2899" exitCode=0
Jan 22 14:15:22 crc kubenswrapper[4769]: I0122 14:15:22.251501 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-45z4x" event={"ID":"a8e33a9b-67b4-4a7d-b205-3ff0c8e6b305","Type":"ContainerDied","Data":"57234f418d799a66b12c9e0541d981690336edf3570e905842072e2e297f2899"}
Jan 22 14:15:22 crc kubenswrapper[4769]: I0122 14:15:22.251541 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-45z4x" event={"ID":"a8e33a9b-67b4-4a7d-b205-3ff0c8e6b305","Type":"ContainerStarted","Data":"67ed005932bf8b8477b9fbe817f754428a2b3e5f690bf3592dd9f06f0f23fea7"}
Jan 22 14:15:22 crc kubenswrapper[4769]: I0122 14:15:22.303978 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-w59m8" podStartSLOduration=2.857024941 podStartE2EDuration="5.303962742s" podCreationTimestamp="2026-01-22 14:15:17 +0000 UTC" firstStartedPulling="2026-01-22 14:15:19.205477387 +0000 UTC m=+1898.616587306" lastFinishedPulling="2026-01-22 14:15:21.652415168 +0000 UTC m=+1901.063525107" observedRunningTime="2026-01-22 14:15:22.302849032 +0000 UTC m=+1901.713958981" watchObservedRunningTime="2026-01-22 14:15:22.303962742 +0000 UTC m=+1901.715072671"
Jan 22 14:15:23 crc kubenswrapper[4769]: I0122 14:15:23.262200 4769 generic.go:334] "Generic (PLEG): container finished" podID="9d77346e-c3d4-43e4-884a-46f2ae336e47" containerID="98678ace4234205b281cb29c8f2b7e40ec6299f0e377b57a17c6ee94a1f915ae" exitCode=0
Jan 22 14:15:23 crc kubenswrapper[4769]: I0122 14:15:23.262383 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rjx2h" event={"ID":"9d77346e-c3d4-43e4-884a-46f2ae336e47","Type":"ContainerDied","Data":"98678ace4234205b281cb29c8f2b7e40ec6299f0e377b57a17c6ee94a1f915ae"}
Jan 22 14:15:23 crc kubenswrapper[4769]: I0122 14:15:23.266722 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-45z4x" event={"ID":"a8e33a9b-67b4-4a7d-b205-3ff0c8e6b305","Type":"ContainerStarted","Data":"b638d671eaa76c731aa18d43a874c2270525877a42a1d8a9b9a2f4c1d848a51d"}
Jan 22 14:15:24 crc kubenswrapper[4769]: I0122 14:15:24.278762 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rjx2h" event={"ID":"9d77346e-c3d4-43e4-884a-46f2ae336e47","Type":"ContainerStarted","Data":"8221790b4c9ab4056175da11201bb348ad9712f1ae5d6572e41ecb8e3932c037"}
Jan 22 14:15:24 crc kubenswrapper[4769]: I0122 14:15:24.280941 4769 generic.go:334] "Generic (PLEG): container finished" podID="a8e33a9b-67b4-4a7d-b205-3ff0c8e6b305" containerID="b638d671eaa76c731aa18d43a874c2270525877a42a1d8a9b9a2f4c1d848a51d" exitCode=0
Jan 22 14:15:24 crc kubenswrapper[4769]: I0122 14:15:24.281032 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-45z4x" event={"ID":"a8e33a9b-67b4-4a7d-b205-3ff0c8e6b305","Type":"ContainerDied","Data":"b638d671eaa76c731aa18d43a874c2270525877a42a1d8a9b9a2f4c1d848a51d"}
Jan 22 14:15:24 crc kubenswrapper[4769]: I0122 14:15:24.335254 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rjx2h" podStartSLOduration=2.871125118 podStartE2EDuration="6.33523238s" podCreationTimestamp="2026-01-22 14:15:18 +0000 UTC" firstStartedPulling="2026-01-22 14:15:20.217583951 +0000 UTC m=+1899.628693870" lastFinishedPulling="2026-01-22 14:15:23.681691193 +0000 UTC m=+1903.092801132" observedRunningTime="2026-01-22 14:15:24.30917442 +0000 UTC m=+1903.720284349" watchObservedRunningTime="2026-01-22 14:15:24.33523238 +0000 UTC m=+1903.746342309"
Jan 22 14:15:25 crc kubenswrapper[4769]: I0122 14:15:25.290300 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-45z4x" event={"ID":"a8e33a9b-67b4-4a7d-b205-3ff0c8e6b305","Type":"ContainerStarted","Data":"c12c44e352bd542c5f287415ea01717149d031bccd094d9d183c7957a4828fa2"}
Jan 22 14:15:25 crc kubenswrapper[4769]: I0122 14:15:25.309671 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-45z4x" podStartSLOduration=2.8847484249999997 podStartE2EDuration="5.309654246s" podCreationTimestamp="2026-01-22 14:15:20 +0000 UTC" firstStartedPulling="2026-01-22 14:15:22.25364652 +0000 UTC m=+1901.664756459" lastFinishedPulling="2026-01-22 14:15:24.678552341 +0000 UTC m=+1904.089662280" observedRunningTime="2026-01-22 14:15:25.307480397 +0000 UTC m=+1904.718590326" watchObservedRunningTime="2026-01-22 14:15:25.309654246 +0000 UTC m=+1904.720764175"
Jan 22 14:15:28 crc kubenswrapper[4769]: I0122 14:15:28.151431 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-w59m8"
Jan 22 14:15:28 crc kubenswrapper[4769]: I0122 14:15:28.151986 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-w59m8"
Jan 22 14:15:28 crc kubenswrapper[4769]: I0122 14:15:28.200742 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-w59m8"
Jan 22 14:15:28 crc kubenswrapper[4769]: I0122 14:15:28.359392 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-w59m8"
Jan 22 14:15:28 crc kubenswrapper[4769]: I0122 14:15:28.781833 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rjx2h"
Jan 22 14:15:28 crc kubenswrapper[4769]: I0122 14:15:28.781880 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rjx2h"
Jan 22 14:15:29 crc kubenswrapper[4769]: I0122 14:15:29.604851 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w59m8"]
Jan 22 14:15:29 crc kubenswrapper[4769]: I0122 14:15:29.833482 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rjx2h" podUID="9d77346e-c3d4-43e4-884a-46f2ae336e47" containerName="registry-server" probeResult="failure" output=<
Jan 22 14:15:29 crc kubenswrapper[4769]: 	timeout: failed to connect service ":50051" within 1s
Jan 22 14:15:29 crc kubenswrapper[4769]: >
Jan 22 14:15:30 crc kubenswrapper[4769]: I0122 14:15:30.340178 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-w59m8" podUID="89d8ffac-0cb1-44e1-96ea-49d6d2509769" containerName="registry-server" containerID="cri-o://f8336f2a7381e30091966db141df9897f5480ba82dcfac91944940d5b2c75082" gracePeriod=2
Jan 22 14:15:31 crc kubenswrapper[4769]: I0122 14:15:31.148375 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-45z4x"
Jan 22 14:15:31 crc kubenswrapper[4769]: I0122 14:15:31.148696 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-45z4x"
Jan 22 14:15:31 crc kubenswrapper[4769]: I0122 14:15:31.347949 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-45z4x"
Jan 22 14:15:31 crc kubenswrapper[4769]: I0122 14:15:31.403821 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-45z4x"
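The "Observed pod startup duration" entry above for certified-operators-45z4x is internally consistent if podStartSLOduration is read as the end-to-end startup time minus the image-pull window, taken on the monotonic clock (the m=+... offsets). That reading is inferred from the numbers in this capture, so treat the sketch below as a sanity check rather than the tracker's definition:

```python
# Values copied from the certified-operators-45z4x entry above.
e2e = 5.309654246            # podStartE2EDuration, seconds
first_pull = 1901.664756459  # firstStartedPulling, monotonic m offset
last_pull = 1904.089662280   # lastFinishedPulling, monotonic m offset

slo = e2e - (last_pull - first_pull)
print(f"{slo:.9f}")          # -> 2.884748425, matching podStartSLOduration
```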
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w59m8" Jan 22 14:15:32 crc kubenswrapper[4769]: I0122 14:15:32.083237 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4brpz\" (UniqueName: \"kubernetes.io/projected/89d8ffac-0cb1-44e1-96ea-49d6d2509769-kube-api-access-4brpz\") pod \"89d8ffac-0cb1-44e1-96ea-49d6d2509769\" (UID: \"89d8ffac-0cb1-44e1-96ea-49d6d2509769\") " Jan 22 14:15:32 crc kubenswrapper[4769]: I0122 14:15:32.083374 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89d8ffac-0cb1-44e1-96ea-49d6d2509769-utilities\") pod \"89d8ffac-0cb1-44e1-96ea-49d6d2509769\" (UID: \"89d8ffac-0cb1-44e1-96ea-49d6d2509769\") " Jan 22 14:15:32 crc kubenswrapper[4769]: I0122 14:15:32.083417 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89d8ffac-0cb1-44e1-96ea-49d6d2509769-catalog-content\") pod \"89d8ffac-0cb1-44e1-96ea-49d6d2509769\" (UID: \"89d8ffac-0cb1-44e1-96ea-49d6d2509769\") " Jan 22 14:15:32 crc kubenswrapper[4769]: I0122 14:15:32.084244 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89d8ffac-0cb1-44e1-96ea-49d6d2509769-utilities" (OuterVolumeSpecName: "utilities") pod "89d8ffac-0cb1-44e1-96ea-49d6d2509769" (UID: "89d8ffac-0cb1-44e1-96ea-49d6d2509769"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 14:15:32 crc kubenswrapper[4769]: I0122 14:15:32.088701 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89d8ffac-0cb1-44e1-96ea-49d6d2509769-kube-api-access-4brpz" (OuterVolumeSpecName: "kube-api-access-4brpz") pod "89d8ffac-0cb1-44e1-96ea-49d6d2509769" (UID: "89d8ffac-0cb1-44e1-96ea-49d6d2509769"). InnerVolumeSpecName "kube-api-access-4brpz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:15:32 crc kubenswrapper[4769]: I0122 14:15:32.157166 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89d8ffac-0cb1-44e1-96ea-49d6d2509769-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "89d8ffac-0cb1-44e1-96ea-49d6d2509769" (UID: "89d8ffac-0cb1-44e1-96ea-49d6d2509769"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 14:15:32 crc kubenswrapper[4769]: I0122 14:15:32.185920 4769 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89d8ffac-0cb1-44e1-96ea-49d6d2509769-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:32 crc kubenswrapper[4769]: I0122 14:15:32.185968 4769 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89d8ffac-0cb1-44e1-96ea-49d6d2509769-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:32 crc kubenswrapper[4769]: I0122 14:15:32.185986 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4brpz\" (UniqueName: \"kubernetes.io/projected/89d8ffac-0cb1-44e1-96ea-49d6d2509769-kube-api-access-4brpz\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:32 crc kubenswrapper[4769]: I0122 14:15:32.361302 4769 generic.go:334] "Generic (PLEG): container finished" podID="89d8ffac-0cb1-44e1-96ea-49d6d2509769" containerID="f8336f2a7381e30091966db141df9897f5480ba82dcfac91944940d5b2c75082" exitCode=0 Jan 22 14:15:32 crc kubenswrapper[4769]: I0122 14:15:32.362208 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w59m8" event={"ID":"89d8ffac-0cb1-44e1-96ea-49d6d2509769","Type":"ContainerDied","Data":"f8336f2a7381e30091966db141df9897f5480ba82dcfac91944940d5b2c75082"} Jan 22 14:15:32 crc kubenswrapper[4769]: I0122 14:15:32.362239 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w59m8" Jan 22 14:15:32 crc kubenswrapper[4769]: I0122 14:15:32.362263 4769 scope.go:117] "RemoveContainer" containerID="f8336f2a7381e30091966db141df9897f5480ba82dcfac91944940d5b2c75082" Jan 22 14:15:32 crc kubenswrapper[4769]: I0122 14:15:32.362251 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w59m8" event={"ID":"89d8ffac-0cb1-44e1-96ea-49d6d2509769","Type":"ContainerDied","Data":"84c238234d2108930635a90652b12b24d8ae96240a5d21a6d053800640059ac4"} Jan 22 14:15:32 crc kubenswrapper[4769]: I0122 14:15:32.386160 4769 scope.go:117] "RemoveContainer" containerID="e2e39f76126f8bbd40b5a4f1c173b388f13f91e1cca3fcaab1958b4eece7c7db" Jan 22 14:15:32 crc kubenswrapper[4769]: I0122 14:15:32.402709 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w59m8"] Jan 22 14:15:32 crc kubenswrapper[4769]: I0122 14:15:32.409834 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-w59m8"] Jan 22 14:15:32 crc kubenswrapper[4769]: I0122 14:15:32.426644 4769 scope.go:117] "RemoveContainer" containerID="1fe3209be1f2e07b6f1c6f971be336e73063e24a90efee784205b5fed64590b0" Jan 22 14:15:32 crc kubenswrapper[4769]: I0122 14:15:32.455275 4769 scope.go:117] "RemoveContainer" containerID="f8336f2a7381e30091966db141df9897f5480ba82dcfac91944940d5b2c75082" Jan 22 14:15:32 crc kubenswrapper[4769]: E0122 14:15:32.455745 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8336f2a7381e30091966db141df9897f5480ba82dcfac91944940d5b2c75082\": container with ID starting with f8336f2a7381e30091966db141df9897f5480ba82dcfac91944940d5b2c75082 not found: ID does not exist" containerID="f8336f2a7381e30091966db141df9897f5480ba82dcfac91944940d5b2c75082" Jan 22 14:15:32 crc kubenswrapper[4769]: I0122 14:15:32.455852 
4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8336f2a7381e30091966db141df9897f5480ba82dcfac91944940d5b2c75082"} err="failed to get container status \"f8336f2a7381e30091966db141df9897f5480ba82dcfac91944940d5b2c75082\": rpc error: code = NotFound desc = could not find container \"f8336f2a7381e30091966db141df9897f5480ba82dcfac91944940d5b2c75082\": container with ID starting with f8336f2a7381e30091966db141df9897f5480ba82dcfac91944940d5b2c75082 not found: ID does not exist" Jan 22 14:15:32 crc kubenswrapper[4769]: I0122 14:15:32.455935 4769 scope.go:117] "RemoveContainer" containerID="e2e39f76126f8bbd40b5a4f1c173b388f13f91e1cca3fcaab1958b4eece7c7db" Jan 22 14:15:32 crc kubenswrapper[4769]: E0122 14:15:32.456342 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2e39f76126f8bbd40b5a4f1c173b388f13f91e1cca3fcaab1958b4eece7c7db\": container with ID starting with e2e39f76126f8bbd40b5a4f1c173b388f13f91e1cca3fcaab1958b4eece7c7db not found: ID does not exist" containerID="e2e39f76126f8bbd40b5a4f1c173b388f13f91e1cca3fcaab1958b4eece7c7db" Jan 22 14:15:32 crc kubenswrapper[4769]: I0122 14:15:32.456441 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2e39f76126f8bbd40b5a4f1c173b388f13f91e1cca3fcaab1958b4eece7c7db"} err="failed to get container status \"e2e39f76126f8bbd40b5a4f1c173b388f13f91e1cca3fcaab1958b4eece7c7db\": rpc error: code = NotFound desc = could not find container \"e2e39f76126f8bbd40b5a4f1c173b388f13f91e1cca3fcaab1958b4eece7c7db\": container with ID starting with e2e39f76126f8bbd40b5a4f1c173b388f13f91e1cca3fcaab1958b4eece7c7db not found: ID does not exist" Jan 22 14:15:32 crc kubenswrapper[4769]: I0122 14:15:32.456503 4769 scope.go:117] "RemoveContainer" containerID="1fe3209be1f2e07b6f1c6f971be336e73063e24a90efee784205b5fed64590b0" Jan 22 14:15:32 crc kubenswrapper[4769]: E0122 14:15:32.456831 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1fe3209be1f2e07b6f1c6f971be336e73063e24a90efee784205b5fed64590b0\": container with ID starting with 1fe3209be1f2e07b6f1c6f971be336e73063e24a90efee784205b5fed64590b0 not found: ID does not exist" containerID="1fe3209be1f2e07b6f1c6f971be336e73063e24a90efee784205b5fed64590b0" Jan 22 14:15:32 crc kubenswrapper[4769]: I0122 14:15:32.456870 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fe3209be1f2e07b6f1c6f971be336e73063e24a90efee784205b5fed64590b0"} err="failed to get container status \"1fe3209be1f2e07b6f1c6f971be336e73063e24a90efee784205b5fed64590b0\": rpc error: code = NotFound desc = could not find container \"1fe3209be1f2e07b6f1c6f971be336e73063e24a90efee784205b5fed64590b0\": container with ID starting with 1fe3209be1f2e07b6f1c6f971be336e73063e24a90efee784205b5fed64590b0 not found: ID does not exist" Jan 22 14:15:32 crc kubenswrapper[4769]: I0122 14:15:32.903629 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89d8ffac-0cb1-44e1-96ea-49d6d2509769" path="/var/lib/kubelet/pods/89d8ffac-0cb1-44e1-96ea-49d6d2509769/volumes" Jan 22 14:15:33 crc kubenswrapper[4769]: I0122 14:15:33.607522 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-45z4x"] Jan 22 14:15:33 crc kubenswrapper[4769]: I0122 14:15:33.607916 4769 kuberuntime_container.go:808] "Killing container with a 
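The RemoveContainer / NotFound chain above is kubelet re-deleting containers the runtime has already removed, so those errors are benign. A hypothetical filter for surfacing only DeleteContainer failures that are not that race (pattern fitted to this capture's wording):

```python
import re

# The benign race always ends with this NotFound phrasing; anything else
# on a "DeleteContainer returned error" line deserves investigation.
BENIGN = re.compile(r'code = NotFound .*not found: ID does not exist')

def suspicious_delete_errors(lines):
    return [line for line in lines
            if "DeleteContainer returned error" in line
            and not BENIGN.search(line)]
```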
grace period" pod="openshift-marketplace/certified-operators-45z4x" podUID="a8e33a9b-67b4-4a7d-b205-3ff0c8e6b305" containerName="registry-server" containerID="cri-o://c12c44e352bd542c5f287415ea01717149d031bccd094d9d183c7957a4828fa2" gracePeriod=2 Jan 22 14:15:34 crc kubenswrapper[4769]: I0122 14:15:34.408431 4769 generic.go:334] "Generic (PLEG): container finished" podID="a8e33a9b-67b4-4a7d-b205-3ff0c8e6b305" containerID="c12c44e352bd542c5f287415ea01717149d031bccd094d9d183c7957a4828fa2" exitCode=0 Jan 22 14:15:34 crc kubenswrapper[4769]: I0122 14:15:34.408544 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-45z4x" event={"ID":"a8e33a9b-67b4-4a7d-b205-3ff0c8e6b305","Type":"ContainerDied","Data":"c12c44e352bd542c5f287415ea01717149d031bccd094d9d183c7957a4828fa2"} Jan 22 14:15:34 crc kubenswrapper[4769]: I0122 14:15:34.615115 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-45z4x" Jan 22 14:15:34 crc kubenswrapper[4769]: I0122 14:15:34.751610 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8e33a9b-67b4-4a7d-b205-3ff0c8e6b305-utilities\") pod \"a8e33a9b-67b4-4a7d-b205-3ff0c8e6b305\" (UID: \"a8e33a9b-67b4-4a7d-b205-3ff0c8e6b305\") " Jan 22 14:15:34 crc kubenswrapper[4769]: I0122 14:15:34.751771 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d576c\" (UniqueName: \"kubernetes.io/projected/a8e33a9b-67b4-4a7d-b205-3ff0c8e6b305-kube-api-access-d576c\") pod \"a8e33a9b-67b4-4a7d-b205-3ff0c8e6b305\" (UID: \"a8e33a9b-67b4-4a7d-b205-3ff0c8e6b305\") " Jan 22 14:15:34 crc kubenswrapper[4769]: I0122 14:15:34.751821 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8e33a9b-67b4-4a7d-b205-3ff0c8e6b305-catalog-content\") pod \"a8e33a9b-67b4-4a7d-b205-3ff0c8e6b305\" (UID: \"a8e33a9b-67b4-4a7d-b205-3ff0c8e6b305\") " Jan 22 14:15:34 crc kubenswrapper[4769]: I0122 14:15:34.753482 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8e33a9b-67b4-4a7d-b205-3ff0c8e6b305-utilities" (OuterVolumeSpecName: "utilities") pod "a8e33a9b-67b4-4a7d-b205-3ff0c8e6b305" (UID: "a8e33a9b-67b4-4a7d-b205-3ff0c8e6b305"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 14:15:34 crc kubenswrapper[4769]: I0122 14:15:34.756984 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8e33a9b-67b4-4a7d-b205-3ff0c8e6b305-kube-api-access-d576c" (OuterVolumeSpecName: "kube-api-access-d576c") pod "a8e33a9b-67b4-4a7d-b205-3ff0c8e6b305" (UID: "a8e33a9b-67b4-4a7d-b205-3ff0c8e6b305"). InnerVolumeSpecName "kube-api-access-d576c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:15:34 crc kubenswrapper[4769]: I0122 14:15:34.803400 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8e33a9b-67b4-4a7d-b205-3ff0c8e6b305-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a8e33a9b-67b4-4a7d-b205-3ff0c8e6b305" (UID: "a8e33a9b-67b4-4a7d-b205-3ff0c8e6b305"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 14:15:34 crc kubenswrapper[4769]: I0122 14:15:34.854109 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d576c\" (UniqueName: \"kubernetes.io/projected/a8e33a9b-67b4-4a7d-b205-3ff0c8e6b305-kube-api-access-d576c\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:34 crc kubenswrapper[4769]: I0122 14:15:34.854164 4769 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8e33a9b-67b4-4a7d-b205-3ff0c8e6b305-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:34 crc kubenswrapper[4769]: I0122 14:15:34.854190 4769 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8e33a9b-67b4-4a7d-b205-3ff0c8e6b305-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:35 crc kubenswrapper[4769]: I0122 14:15:35.423093 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-45z4x" event={"ID":"a8e33a9b-67b4-4a7d-b205-3ff0c8e6b305","Type":"ContainerDied","Data":"67ed005932bf8b8477b9fbe817f754428a2b3e5f690bf3592dd9f06f0f23fea7"} Jan 22 14:15:35 crc kubenswrapper[4769]: I0122 14:15:35.423184 4769 scope.go:117] "RemoveContainer" containerID="c12c44e352bd542c5f287415ea01717149d031bccd094d9d183c7957a4828fa2" Jan 22 14:15:35 crc kubenswrapper[4769]: I0122 14:15:35.423623 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-45z4x" Jan 22 14:15:35 crc kubenswrapper[4769]: I0122 14:15:35.447304 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-45z4x"] Jan 22 14:15:35 crc kubenswrapper[4769]: I0122 14:15:35.449626 4769 scope.go:117] "RemoveContainer" containerID="b638d671eaa76c731aa18d43a874c2270525877a42a1d8a9b9a2f4c1d848a51d" Jan 22 14:15:35 crc kubenswrapper[4769]: I0122 14:15:35.456219 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-45z4x"] Jan 22 14:15:35 crc kubenswrapper[4769]: I0122 14:15:35.466000 4769 scope.go:117] "RemoveContainer" containerID="57234f418d799a66b12c9e0541d981690336edf3570e905842072e2e297f2899" Jan 22 14:15:36 crc kubenswrapper[4769]: I0122 14:15:36.896024 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8e33a9b-67b4-4a7d-b205-3ff0c8e6b305" path="/var/lib/kubelet/pods/a8e33a9b-67b4-4a7d-b205-3ff0c8e6b305/volumes" Jan 22 14:15:38 crc kubenswrapper[4769]: I0122 14:15:38.831091 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rjx2h" Jan 22 14:15:38 crc kubenswrapper[4769]: I0122 14:15:38.876985 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rjx2h" Jan 22 14:15:39 crc kubenswrapper[4769]: I0122 14:15:39.204248 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rjx2h"] Jan 22 14:15:40 crc kubenswrapper[4769]: I0122 14:15:40.486149 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rjx2h" podUID="9d77346e-c3d4-43e4-884a-46f2ae336e47" containerName="registry-server" containerID="cri-o://8221790b4c9ab4056175da11201bb348ad9712f1ae5d6572e41ecb8e3932c037" gracePeriod=2 Jan 22 14:15:40 crc kubenswrapper[4769]: I0122 14:15:40.941302 4769 util.go:48] "No ready sandbox for pod can be found. 
Jan 22 14:15:41 crc kubenswrapper[4769]: I0122 14:15:41.062962 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d77346e-c3d4-43e4-884a-46f2ae336e47-utilities\") pod \"9d77346e-c3d4-43e4-884a-46f2ae336e47\" (UID: \"9d77346e-c3d4-43e4-884a-46f2ae336e47\") "
Jan 22 14:15:41 crc kubenswrapper[4769]: I0122 14:15:41.063104 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d77346e-c3d4-43e4-884a-46f2ae336e47-catalog-content\") pod \"9d77346e-c3d4-43e4-884a-46f2ae336e47\" (UID: \"9d77346e-c3d4-43e4-884a-46f2ae336e47\") "
Jan 22 14:15:41 crc kubenswrapper[4769]: I0122 14:15:41.063224 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48knx\" (UniqueName: \"kubernetes.io/projected/9d77346e-c3d4-43e4-884a-46f2ae336e47-kube-api-access-48knx\") pod \"9d77346e-c3d4-43e4-884a-46f2ae336e47\" (UID: \"9d77346e-c3d4-43e4-884a-46f2ae336e47\") "
Jan 22 14:15:41 crc kubenswrapper[4769]: I0122 14:15:41.063947 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d77346e-c3d4-43e4-884a-46f2ae336e47-utilities" (OuterVolumeSpecName: "utilities") pod "9d77346e-c3d4-43e4-884a-46f2ae336e47" (UID: "9d77346e-c3d4-43e4-884a-46f2ae336e47"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 14:15:41 crc kubenswrapper[4769]: I0122 14:15:41.070019 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d77346e-c3d4-43e4-884a-46f2ae336e47-kube-api-access-48knx" (OuterVolumeSpecName: "kube-api-access-48knx") pod "9d77346e-c3d4-43e4-884a-46f2ae336e47" (UID: "9d77346e-c3d4-43e4-884a-46f2ae336e47"). InnerVolumeSpecName "kube-api-access-48knx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 14:15:41 crc kubenswrapper[4769]: I0122 14:15:41.165401 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48knx\" (UniqueName: \"kubernetes.io/projected/9d77346e-c3d4-43e4-884a-46f2ae336e47-kube-api-access-48knx\") on node \"crc\" DevicePath \"\""
Jan 22 14:15:41 crc kubenswrapper[4769]: I0122 14:15:41.165451 4769 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d77346e-c3d4-43e4-884a-46f2ae336e47-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 14:15:41 crc kubenswrapper[4769]: I0122 14:15:41.180723 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d77346e-c3d4-43e4-884a-46f2ae336e47-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9d77346e-c3d4-43e4-884a-46f2ae336e47" (UID: "9d77346e-c3d4-43e4-884a-46f2ae336e47"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 14:15:41 crc kubenswrapper[4769]: I0122 14:15:41.267319 4769 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d77346e-c3d4-43e4-884a-46f2ae336e47-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 14:15:41 crc kubenswrapper[4769]: I0122 14:15:41.496715 4769 generic.go:334] "Generic (PLEG): container finished" podID="9d77346e-c3d4-43e4-884a-46f2ae336e47" containerID="8221790b4c9ab4056175da11201bb348ad9712f1ae5d6572e41ecb8e3932c037" exitCode=0 Jan 22 14:15:41 crc kubenswrapper[4769]: I0122 14:15:41.496764 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rjx2h" event={"ID":"9d77346e-c3d4-43e4-884a-46f2ae336e47","Type":"ContainerDied","Data":"8221790b4c9ab4056175da11201bb348ad9712f1ae5d6572e41ecb8e3932c037"} Jan 22 14:15:41 crc kubenswrapper[4769]: I0122 14:15:41.496771 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rjx2h" Jan 22 14:15:41 crc kubenswrapper[4769]: I0122 14:15:41.496811 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rjx2h" event={"ID":"9d77346e-c3d4-43e4-884a-46f2ae336e47","Type":"ContainerDied","Data":"5dcd8a7cf4e2ee8d5a4145e9f3f622fffcf740f9e57227c0646fd565d73493e7"} Jan 22 14:15:41 crc kubenswrapper[4769]: I0122 14:15:41.496833 4769 scope.go:117] "RemoveContainer" containerID="8221790b4c9ab4056175da11201bb348ad9712f1ae5d6572e41ecb8e3932c037" Jan 22 14:15:41 crc kubenswrapper[4769]: I0122 14:15:41.523661 4769 scope.go:117] "RemoveContainer" containerID="98678ace4234205b281cb29c8f2b7e40ec6299f0e377b57a17c6ee94a1f915ae" Jan 22 14:15:41 crc kubenswrapper[4769]: I0122 14:15:41.547684 4769 scope.go:117] "RemoveContainer" containerID="78569a02355191754b78509ff5f795c5a489b088ebb5a909cbf7c91e86823f84" Jan 22 14:15:41 crc kubenswrapper[4769]: I0122 14:15:41.611005 4769 scope.go:117] "RemoveContainer" containerID="8221790b4c9ab4056175da11201bb348ad9712f1ae5d6572e41ecb8e3932c037" Jan 22 14:15:41 crc kubenswrapper[4769]: E0122 14:15:41.612297 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8221790b4c9ab4056175da11201bb348ad9712f1ae5d6572e41ecb8e3932c037\": container with ID starting with 8221790b4c9ab4056175da11201bb348ad9712f1ae5d6572e41ecb8e3932c037 not found: ID does not exist" containerID="8221790b4c9ab4056175da11201bb348ad9712f1ae5d6572e41ecb8e3932c037" Jan 22 14:15:41 crc kubenswrapper[4769]: I0122 14:15:41.612351 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8221790b4c9ab4056175da11201bb348ad9712f1ae5d6572e41ecb8e3932c037"} err="failed to get container status \"8221790b4c9ab4056175da11201bb348ad9712f1ae5d6572e41ecb8e3932c037\": rpc error: code = NotFound desc = could not find container \"8221790b4c9ab4056175da11201bb348ad9712f1ae5d6572e41ecb8e3932c037\": container with ID starting with 8221790b4c9ab4056175da11201bb348ad9712f1ae5d6572e41ecb8e3932c037 not found: ID does not exist" Jan 22 14:15:41 crc kubenswrapper[4769]: I0122 14:15:41.612379 4769 scope.go:117] "RemoveContainer" containerID="98678ace4234205b281cb29c8f2b7e40ec6299f0e377b57a17c6ee94a1f915ae" Jan 22 14:15:41 crc kubenswrapper[4769]: E0122 14:15:41.612933 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"98678ace4234205b281cb29c8f2b7e40ec6299f0e377b57a17c6ee94a1f915ae\": container with ID starting with 98678ace4234205b281cb29c8f2b7e40ec6299f0e377b57a17c6ee94a1f915ae not found: ID does not exist" containerID="98678ace4234205b281cb29c8f2b7e40ec6299f0e377b57a17c6ee94a1f915ae" Jan 22 14:15:41 crc kubenswrapper[4769]: I0122 14:15:41.612960 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98678ace4234205b281cb29c8f2b7e40ec6299f0e377b57a17c6ee94a1f915ae"} err="failed to get container status \"98678ace4234205b281cb29c8f2b7e40ec6299f0e377b57a17c6ee94a1f915ae\": rpc error: code = NotFound desc = could not find container \"98678ace4234205b281cb29c8f2b7e40ec6299f0e377b57a17c6ee94a1f915ae\": container with ID starting with 98678ace4234205b281cb29c8f2b7e40ec6299f0e377b57a17c6ee94a1f915ae not found: ID does not exist" Jan 22 14:15:41 crc kubenswrapper[4769]: I0122 14:15:41.612979 4769 scope.go:117] "RemoveContainer" containerID="78569a02355191754b78509ff5f795c5a489b088ebb5a909cbf7c91e86823f84" Jan 22 14:15:41 crc kubenswrapper[4769]: E0122 14:15:41.613261 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78569a02355191754b78509ff5f795c5a489b088ebb5a909cbf7c91e86823f84\": container with ID starting with 78569a02355191754b78509ff5f795c5a489b088ebb5a909cbf7c91e86823f84 not found: ID does not exist" containerID="78569a02355191754b78509ff5f795c5a489b088ebb5a909cbf7c91e86823f84" Jan 22 14:15:41 crc kubenswrapper[4769]: I0122 14:15:41.613289 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78569a02355191754b78509ff5f795c5a489b088ebb5a909cbf7c91e86823f84"} err="failed to get container status \"78569a02355191754b78509ff5f795c5a489b088ebb5a909cbf7c91e86823f84\": rpc error: code = NotFound desc = could not find container \"78569a02355191754b78509ff5f795c5a489b088ebb5a909cbf7c91e86823f84\": container with ID starting with 78569a02355191754b78509ff5f795c5a489b088ebb5a909cbf7c91e86823f84 not found: ID does not exist" Jan 22 14:15:41 crc kubenswrapper[4769]: I0122 14:15:41.615291 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rjx2h"] Jan 22 14:15:41 crc kubenswrapper[4769]: I0122 14:15:41.625782 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rjx2h"] Jan 22 14:15:42 crc kubenswrapper[4769]: I0122 14:15:42.899375 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d77346e-c3d4-43e4-884a-46f2ae336e47" path="/var/lib/kubelet/pods/9d77346e-c3d4-43e4-884a-46f2ae336e47/volumes" var/home/core/zuul-output/logs/crc-cloud-workdir-crc-all-logs.tar.gz0000644000175000000000000000005515134430436024450 0ustar coreroot  Om77'(var/home/core/zuul-output/logs/crc-cloud/0000755000175000000000000000000015134430436017365 5ustar corerootvar/home/core/zuul-output/artifacts/0000755000175000017500000000000015134424265016513 5ustar corecorevar/home/core/zuul-output/docs/0000755000175000017500000000000015134424265015463 5ustar corecore